00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 177 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3678 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.072 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.144 Using shallow fetch with depth 1 00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.144 > git --version # timeout=10 00:00:00.184 > git --version # 'git version 2.39.2' 00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.171 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.182 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.194 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.194 > git config core.sparsecheckout # timeout=10 00:00:06.203 > git read-tree -mu HEAD # timeout=10 00:00:06.219 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.240 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.241 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.324 [Pipeline] Start of Pipeline 00:00:06.337 [Pipeline] library 00:00:06.338 Loading library shm_lib@master 00:00:06.338 Library shm_lib@master is cached. Copying from home. 00:00:06.355 [Pipeline] node 00:00:06.378 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.379 [Pipeline] { 00:00:06.391 [Pipeline] catchError 00:00:06.393 [Pipeline] { 00:00:06.405 [Pipeline] wrap 00:00:06.413 [Pipeline] { 00:00:06.421 [Pipeline] stage 00:00:06.422 [Pipeline] { (Prologue) 00:00:06.438 [Pipeline] echo 00:00:06.439 Node: VM-host-SM38 00:00:06.444 [Pipeline] cleanWs 00:00:06.452 [WS-CLEANUP] Deleting project workspace... 00:00:06.452 [WS-CLEANUP] Deferred wipeout is used... 
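The prologue at the top of this log pins the jbp helper repo to a single revision with a shallow fetch before the pipeline stages begin. Below is a minimal stand-alone sketch of that checkout pattern; WORKDIR is a placeholder for the Jenkins workspace path, and the credential/proxy handling the plugin performs (GIT_ASKPASS, proxy-dmz.intel.com:911) is omitted. The pinned hash is the master tip that the fetch itself brings in (FETCH_HEAD above), which is why a plain checkout of it succeeds despite depth=1.

  # Sketch of the shallow, pinned checkout from the prologue above.
  # REPO and REV are copied from the log; WORKDIR stands in for the workspace dir.
  REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  REV=db4637e8b949f278f369ec13f70585206ccd9507
  WORKDIR=./jbp

  git init "$WORKDIR"
  cd "$WORKDIR"
  git config remote.origin.url "$REPO"
  # --depth=1 matches the "Using shallow fetch with depth 1" line above
  git fetch --tags --force --progress --depth=1 -- "$REPO" refs/heads/master
  git checkout -f "$REV"    # REV is FETCH_HEAD here, so the object is already local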
00:00:06.459 [WS-CLEANUP] done 00:00:06.640 [Pipeline] setCustomBuildProperty 00:00:06.708 [Pipeline] httpRequest 00:00:06.994 [Pipeline] echo 00:00:06.996 Sorcerer 10.211.164.20 is alive 00:00:07.005 [Pipeline] retry 00:00:07.007 [Pipeline] { 00:00:07.022 [Pipeline] httpRequest 00:00:07.028 HttpMethod: GET 00:00:07.028 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.029 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.038 Response Code: HTTP/1.1 200 OK 00:00:07.039 Success: Status code 200 is in the accepted range: 200,404 00:00:07.039 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.653 [Pipeline] } 00:00:09.668 [Pipeline] // retry 00:00:09.677 [Pipeline] sh 00:00:09.963 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.982 [Pipeline] httpRequest 00:00:10.312 [Pipeline] echo 00:00:10.313 Sorcerer 10.211.164.20 is alive 00:00:10.321 [Pipeline] retry 00:00:10.322 [Pipeline] { 00:00:10.333 [Pipeline] httpRequest 00:00:10.338 HttpMethod: GET 00:00:10.338 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:10.339 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:10.350 Response Code: HTTP/1.1 200 OK 00:00:10.350 Success: Status code 200 is in the accepted range: 200,404 00:00:10.351 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:54.585 [Pipeline] } 00:01:54.603 [Pipeline] // retry 00:01:54.614 [Pipeline] sh 00:01:54.899 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:57.451 [Pipeline] sh 00:01:57.735 + git -C spdk log --oneline -n5 00:01:57.735 b18e1bd62 version: v24.09.1-pre 00:01:57.735 19524ad45 version: v24.09 00:01:57.735 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:57.735 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:57.735 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:57.760 [Pipeline] withCredentials 00:01:57.773 > git --version # timeout=10 00:01:57.784 > git --version # 'git version 2.39.2' 00:01:57.804 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:57.807 [Pipeline] { 00:01:57.819 [Pipeline] retry 00:01:57.822 [Pipeline] { 00:01:57.840 [Pipeline] sh 00:01:58.125 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:58.139 [Pipeline] } 00:01:58.157 [Pipeline] // retry 00:01:58.162 [Pipeline] } 00:01:58.178 [Pipeline] // withCredentials 00:01:58.187 [Pipeline] httpRequest 00:01:58.571 [Pipeline] echo 00:01:58.573 Sorcerer 10.211.164.20 is alive 00:01:58.585 [Pipeline] retry 00:01:58.587 [Pipeline] { 00:01:58.603 [Pipeline] httpRequest 00:01:58.609 HttpMethod: GET 00:01:58.609 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.610 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.613 Response Code: HTTP/1.1 200 OK 00:01:58.613 Success: Status code 200 is in the accepted range: 200,404 00:01:58.614 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:08.227 [Pipeline] } 00:02:08.243 [Pipeline] // retry 00:02:08.251 [Pipeline] sh 00:02:08.527 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:09.909 [Pipeline] sh 00:02:10.183 + git -C dpdk log --oneline -n5 00:02:10.184 eeb0605f11 version: 23.11.0 00:02:10.184 238778122a doc: update release notes for 23.11 00:02:10.184 46aa6b3cfc doc: fix description of RSS features 00:02:10.184 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:10.184 7e421ae345 devtools: support skipping forbid rule check 00:02:10.199 [Pipeline] writeFile 00:02:10.212 [Pipeline] sh 00:02:10.490 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:10.500 [Pipeline] sh 00:02:10.775 + cat autorun-spdk.conf 00:02:10.775 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.775 SPDK_TEST_NVME=1 00:02:10.775 SPDK_TEST_FTL=1 00:02:10.775 SPDK_TEST_ISAL=1 00:02:10.775 SPDK_RUN_ASAN=1 00:02:10.775 SPDK_RUN_UBSAN=1 00:02:10.775 SPDK_TEST_XNVME=1 00:02:10.775 SPDK_TEST_NVME_FDP=1 00:02:10.775 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.775 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.776 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.781 RUN_NIGHTLY=1 00:02:10.784 [Pipeline] } 00:02:10.797 [Pipeline] // stage 00:02:10.813 [Pipeline] stage 00:02:10.816 [Pipeline] { (Run VM) 00:02:10.828 [Pipeline] sh 00:02:11.105 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:11.105 + echo 'Start stage prepare_nvme.sh' 00:02:11.105 Start stage prepare_nvme.sh 00:02:11.105 + [[ -n 6 ]] 00:02:11.105 + disk_prefix=ex6 00:02:11.105 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:11.105 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:11.106 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:11.106 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.106 ++ SPDK_TEST_NVME=1 00:02:11.106 ++ SPDK_TEST_FTL=1 00:02:11.106 ++ SPDK_TEST_ISAL=1 00:02:11.106 ++ SPDK_RUN_ASAN=1 00:02:11.106 ++ SPDK_RUN_UBSAN=1 00:02:11.106 ++ SPDK_TEST_XNVME=1 00:02:11.106 ++ SPDK_TEST_NVME_FDP=1 00:02:11.106 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.106 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:11.106 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.106 ++ RUN_NIGHTLY=1 00:02:11.106 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:11.106 + nvme_files=() 00:02:11.106 + declare -A nvme_files 00:02:11.106 + backend_dir=/var/lib/libvirt/images/backends 00:02:11.106 + nvme_files['nvme.img']=5G 00:02:11.106 + nvme_files['nvme-cmb.img']=5G 00:02:11.106 + nvme_files['nvme-multi0.img']=4G 00:02:11.106 + nvme_files['nvme-multi1.img']=4G 00:02:11.106 + nvme_files['nvme-multi2.img']=4G 00:02:11.106 + nvme_files['nvme-openstack.img']=8G 00:02:11.106 + nvme_files['nvme-zns.img']=5G 00:02:11.106 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:11.106 + (( SPDK_TEST_FTL == 1 )) 00:02:11.106 + nvme_files["nvme-ftl.img"]=6G 00:02:11.106 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:11.106 + nvme_files["nvme-fdp.img"]=1G 00:02:11.106 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:11.106 + for nvme in "${!nvme_files[@]}" 00:02:11.106 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:11.106 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.106 + for nvme in "${!nvme_files[@]}" 00:02:11.106 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:02:11.106 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:11.106 + for nvme in "${!nvme_files[@]}" 00:02:11.106 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:11.106 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.106 + for nvme in "${!nvme_files[@]}" 00:02:11.106 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:11.365 + for nvme in "${!nvme_files[@]}" 00:02:11.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.365 + for nvme in "${!nvme_files[@]}" 00:02:11.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.365 + for nvme in "${!nvme_files[@]}" 00:02:11.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.365 + for nvme in "${!nvme_files[@]}" 00:02:11.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:11.365 + for nvme in "${!nvme_files[@]}" 00:02:11.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:11.365 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.365 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:11.365 + echo 'End stage prepare_nvme.sh' 00:02:11.365 End stage prepare_nvme.sh 00:02:11.376 [Pipeline] sh 00:02:11.653 + DISTRO=fedora39 00:02:11.653 + CPUS=10 00:02:11.653 + RAM=12288 00:02:11.653 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:11.653 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:11.653 00:02:11.653 
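The prepare_nvme.sh loop above sizes each backing file from the nvme_files associative array and hands it to scripts/vagrant/create_nvme_img.sh. That script's body is not part of this log; the "Formatting ..., fmt=raw ... preallocation=falloc" lines it prints look like qemu-img output, so the following is only a hypothetical sketch of the equivalent loop, assuming qemu-img create (or something that behaves like it) is doing the work.

  # Hypothetical reconstruction of the image-creation loop traced above.
  # Names and sizes mirror the nvme_files map in the log; qemu-img is an
  # assumption inferred from the "fmt=raw ... preallocation=falloc" output.
  backend_dir=/var/lib/libvirt/images/backends
  declare -A nvme_files=(
    [nvme.img]=5G        [nvme-cmb.img]=5G    [nvme-multi0.img]=4G
    [nvme-multi1.img]=4G [nvme-multi2.img]=4G [nvme-openstack.img]=8G
    [nvme-zns.img]=5G    [nvme-ftl.img]=6G    [nvme-fdp.img]=1G
  )
  for name in "${!nvme_files[@]}"; do
    qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex6-$name" "${nvme_files[$name]}"
  done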
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:11.653 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:11.653 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:11.653 HELP=0 00:02:11.653 DRY_RUN=0 00:02:11.653 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:02:11.653 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:11.653 NVME_AUTO_CREATE=0 00:02:11.653 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:02:11.653 NVME_CMB=,,,, 00:02:11.653 NVME_PMR=,,,, 00:02:11.653 NVME_ZNS=,,,, 00:02:11.653 NVME_MS=true,,,, 00:02:11.653 NVME_FDP=,,,on, 00:02:11.653 SPDK_VAGRANT_DISTRO=fedora39 00:02:11.653 SPDK_VAGRANT_VMCPU=10 00:02:11.653 SPDK_VAGRANT_VMRAM=12288 00:02:11.653 SPDK_VAGRANT_PROVIDER=libvirt 00:02:11.653 SPDK_VAGRANT_HTTP_PROXY= 00:02:11.653 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:11.654 SPDK_OPENSTACK_NETWORK=0 00:02:11.654 VAGRANT_PACKAGE_BOX=0 00:02:11.654 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:11.654 FORCE_DISTRO=true 00:02:11.654 VAGRANT_BOX_VERSION= 00:02:11.654 EXTRA_VAGRANTFILES= 00:02:11.654 NIC_MODEL=e1000 00:02:11.654 00:02:11.654 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:11.654 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:14.181 Bringing machine 'default' up with 'libvirt' provider... 00:02:14.181 ==> default: Creating image (snapshot of base box volume). 00:02:14.181 ==> default: Creating domain with the following settings... 
00:02:14.181 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732889395_351b086b73a9a0125374 00:02:14.181 ==> default: -- Domain type: kvm 00:02:14.181 ==> default: -- Cpus: 10 00:02:14.181 ==> default: -- Feature: acpi 00:02:14.181 ==> default: -- Feature: apic 00:02:14.181 ==> default: -- Feature: pae 00:02:14.181 ==> default: -- Memory: 12288M 00:02:14.181 ==> default: -- Memory Backing: hugepages: 00:02:14.181 ==> default: -- Management MAC: 00:02:14.182 ==> default: -- Loader: 00:02:14.182 ==> default: -- Nvram: 00:02:14.182 ==> default: -- Base box: spdk/fedora39 00:02:14.182 ==> default: -- Storage pool: default 00:02:14.182 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732889395_351b086b73a9a0125374.img (20G) 00:02:14.182 ==> default: -- Volume Cache: default 00:02:14.182 ==> default: -- Kernel: 00:02:14.182 ==> default: -- Initrd: 00:02:14.182 ==> default: -- Graphics Type: vnc 00:02:14.182 ==> default: -- Graphics Port: -1 00:02:14.182 ==> default: -- Graphics IP: 127.0.0.1 00:02:14.182 ==> default: -- Graphics Password: Not defined 00:02:14.182 ==> default: -- Video Type: cirrus 00:02:14.182 ==> default: -- Video VRAM: 9216 00:02:14.182 ==> default: -- Sound Type: 00:02:14.182 ==> default: -- Keymap: en-us 00:02:14.182 ==> default: -- TPM Path: 00:02:14.182 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:14.182 ==> default: -- Command line args: 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:14.182 ==> default: -> value=-drive, 00:02:14.182 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:14.182 ==> default: -> value=-device, 00:02:14.182 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:14.483 ==> default: Creating shared folders metadata... 00:02:14.483 ==> default: Starting domain. 00:02:15.439 ==> default: Waiting for domain to get an IP address... 00:02:27.635 ==> default: Waiting for SSH to become available... 00:02:29.003 ==> default: Configuring and enabling network interfaces... 00:02:33.191 default: SSH address: 192.168.121.205:22 00:02:33.191 default: SSH username: vagrant 00:02:33.191 default: SSH auth method: private key 00:02:35.100 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:43.236 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:48.530 ==> default: Mounting SSHFS shared folder... 00:02:50.453 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:50.453 ==> default: Checking Mount.. 00:02:51.397 ==> default: Folder Successfully Mounted! 00:02:51.397 00:02:51.397 SUCCESS! 00:02:51.397 00:02:51.397 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:51.397 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:51.397 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:51.397 00:02:51.407 [Pipeline] } 00:02:51.422 [Pipeline] // stage 00:02:51.430 [Pipeline] dir 00:02:51.430 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:51.432 [Pipeline] { 00:02:51.443 [Pipeline] catchError 00:02:51.445 [Pipeline] { 00:02:51.457 [Pipeline] sh 00:02:51.740 + vagrant ssh-config --host vagrant 00:02:51.741 + sed -ne '/^Host/,$p' 00:02:51.741 + tee ssh_conf 00:02:54.283 Host vagrant 00:02:54.283 HostName 192.168.121.205 00:02:54.283 User vagrant 00:02:54.283 Port 22 00:02:54.283 UserKnownHostsFile /dev/null 00:02:54.283 StrictHostKeyChecking no 00:02:54.283 PasswordAuthentication no 00:02:54.283 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:54.283 IdentitiesOnly yes 00:02:54.283 LogLevel FATAL 00:02:54.283 ForwardAgent yes 00:02:54.283 ForwardX11 yes 00:02:54.283 00:02:54.298 [Pipeline] withEnv 00:02:54.300 [Pipeline] { 00:02:54.314 [Pipeline] sh 00:02:54.608 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:54.608 source /etc/os-release 00:02:54.608 [[ -e /image.version ]] && img=$(< /image.version) 00:02:54.608 # Minimal, systemd-like check. 
00:02:54.608 if [[ -e /.dockerenv ]]; then 00:02:54.608 # Clear garbage from the node'\''s name: 00:02:54.608 # agt-er_autotest_547-896 -> autotest_547-896 00:02:54.608 # $HOSTNAME is the actual container id 00:02:54.608 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:54.608 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:54.608 # We can assume this is a mount from a host where container is running, 00:02:54.608 # so fetch its hostname to easily identify the target swarm worker. 00:02:54.608 container="$(< /etc/hostname) ($agent)" 00:02:54.608 else 00:02:54.608 # Fallback 00:02:54.608 container=$agent 00:02:54.608 fi 00:02:54.608 fi 00:02:54.608 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:54.608 ' 00:02:54.620 [Pipeline] } 00:02:54.641 [Pipeline] // withEnv 00:02:54.648 [Pipeline] setCustomBuildProperty 00:02:54.661 [Pipeline] stage 00:02:54.662 [Pipeline] { (Tests) 00:02:54.681 [Pipeline] sh 00:02:54.963 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:54.977 [Pipeline] sh 00:02:55.258 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:55.531 [Pipeline] timeout 00:02:55.532 Timeout set to expire in 50 min 00:02:55.534 [Pipeline] { 00:02:55.548 [Pipeline] sh 00:02:55.829 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:56.090 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:56.104 [Pipeline] sh 00:02:56.387 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:56.659 [Pipeline] sh 00:02:56.941 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:56.959 [Pipeline] sh 00:02:57.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:57.238 ++ readlink -f spdk_repo 00:02:57.238 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:57.238 + [[ -n /home/vagrant/spdk_repo ]] 00:02:57.238 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:57.238 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:57.238 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:57.238 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:57.238 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:57.238 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:57.238 + cd /home/vagrant/spdk_repo 00:02:57.238 + source /etc/os-release 00:02:57.238 ++ NAME='Fedora Linux' 00:02:57.238 ++ VERSION='39 (Cloud Edition)' 00:02:57.238 ++ ID=fedora 00:02:57.238 ++ VERSION_ID=39 00:02:57.238 ++ VERSION_CODENAME= 00:02:57.238 ++ PLATFORM_ID=platform:f39 00:02:57.238 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:57.238 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:57.238 ++ LOGO=fedora-logo-icon 00:02:57.238 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:57.238 ++ HOME_URL=https://fedoraproject.org/ 00:02:57.238 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:57.238 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:57.238 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:57.238 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:57.238 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:57.238 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:57.238 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:57.238 ++ SUPPORT_END=2024-11-12 00:02:57.238 ++ VARIANT='Cloud Edition' 00:02:57.238 ++ VARIANT_ID=cloud 00:02:57.238 + uname -a 00:02:57.238 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:57.238 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:57.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:57.804 Hugepages 00:02:57.804 node hugesize free / total 00:02:57.804 node0 1048576kB 0 / 0 00:02:57.804 node0 2048kB 0 / 0 00:02:57.804 00:02:57.804 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.804 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:57.804 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:58.063 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:58.063 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:58.063 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:58.063 + rm -f /tmp/spdk-ld-path 00:02:58.063 + source autorun-spdk.conf 00:02:58.063 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.063 ++ SPDK_TEST_NVME=1 00:02:58.063 ++ SPDK_TEST_FTL=1 00:02:58.063 ++ SPDK_TEST_ISAL=1 00:02:58.063 ++ SPDK_RUN_ASAN=1 00:02:58.063 ++ SPDK_RUN_UBSAN=1 00:02:58.063 ++ SPDK_TEST_XNVME=1 00:02:58.063 ++ SPDK_TEST_NVME_FDP=1 00:02:58.063 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:58.063 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:58.063 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.063 ++ RUN_NIGHTLY=1 00:02:58.063 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:58.063 + [[ -n '' ]] 00:02:58.063 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:58.063 + for M in /var/spdk/build-*-manifest.txt 00:02:58.063 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:58.063 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.063 + for M in /var/spdk/build-*-manifest.txt 00:02:58.063 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:58.063 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.063 + for M in /var/spdk/build-*-manifest.txt 00:02:58.063 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:58.063 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:58.063 ++ uname 00:02:58.063 + [[ Linux == 
\L\i\n\u\x ]] 00:02:58.063 + sudo dmesg -T 00:02:58.063 + sudo dmesg --clear 00:02:58.063 + dmesg_pid=5764 00:02:58.063 + [[ Fedora Linux == FreeBSD ]] 00:02:58.063 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.063 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:58.063 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:58.063 + [[ -x /usr/src/fio-static/fio ]] 00:02:58.063 + sudo dmesg -Tw 00:02:58.063 + export FIO_BIN=/usr/src/fio-static/fio 00:02:58.063 + FIO_BIN=/usr/src/fio-static/fio 00:02:58.063 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:58.063 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:58.063 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:58.063 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.063 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:58.063 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:58.063 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.063 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:58.063 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:58.063 Test configuration: 00:02:58.063 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:58.063 SPDK_TEST_NVME=1 00:02:58.063 SPDK_TEST_FTL=1 00:02:58.063 SPDK_TEST_ISAL=1 00:02:58.063 SPDK_RUN_ASAN=1 00:02:58.063 SPDK_RUN_UBSAN=1 00:02:58.063 SPDK_TEST_XNVME=1 00:02:58.063 SPDK_TEST_NVME_FDP=1 00:02:58.063 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:58.063 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:58.063 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:58.063 RUN_NIGHTLY=1 14:10:39 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:58.063 14:10:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:58.063 14:10:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:58.063 14:10:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:58.063 14:10:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.063 14:10:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.064 14:10:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.064 14:10:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.064 14:10:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.064 14:10:39 -- paths/export.sh@5 -- $ export PATH 00:02:58.064 14:10:39 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.064 14:10:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:58.064 14:10:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:58.064 14:10:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732889439.XXXXXX 00:02:58.064 14:10:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732889439.FWw5BE 00:02:58.064 14:10:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:58.064 14:10:39 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:58.064 14:10:39 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:58.064 14:10:39 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:58.064 14:10:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:58.064 14:10:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:58.064 14:10:39 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:58.064 14:10:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:58.064 14:10:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.064 14:10:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:58.064 14:10:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:58.064 14:10:39 -- pm/common@17 -- $ local monitor 00:02:58.064 14:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.064 14:10:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.064 14:10:39 -- pm/common@25 -- $ sleep 1 00:02:58.064 14:10:39 -- pm/common@21 -- $ date +%s 00:02:58.064 14:10:39 -- pm/common@21 -- $ date +%s 00:02:58.064 14:10:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732889439 00:02:58.064 14:10:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732889439 00:02:58.323 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732889439_collect-cpu-load.pm.log 00:02:58.323 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732889439_collect-vmstat.pm.log 00:02:59.261 14:10:40 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:59.261 14:10:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:59.261 14:10:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:59.261 14:10:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.261 14:10:40 -- spdk/autobuild.sh@16 -- $ date -u 00:02:59.261 Fri 
Nov 29 02:10:40 PM UTC 2024 00:02:59.261 14:10:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:59.261 v24.09-1-gb18e1bd62 00:02:59.261 14:10:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:59.261 14:10:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:59.261 14:10:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.261 14:10:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.261 14:10:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.261 ************************************ 00:02:59.261 START TEST asan 00:02:59.261 ************************************ 00:02:59.261 using asan 00:02:59.261 14:10:40 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:59.261 00:02:59.261 real 0m0.000s 00:02:59.261 user 0m0.000s 00:02:59.261 sys 0m0.000s 00:02:59.261 14:10:40 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.261 14:10:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.261 ************************************ 00:02:59.261 END TEST asan 00:02:59.261 ************************************ 00:02:59.261 14:10:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:59.261 14:10:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:59.261 14:10:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:59.261 14:10:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.261 14:10:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.261 ************************************ 00:02:59.261 START TEST ubsan 00:02:59.261 ************************************ 00:02:59.261 using ubsan 00:02:59.261 14:10:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:59.261 00:02:59.261 real 0m0.000s 00:02:59.261 user 0m0.000s 00:02:59.261 sys 0m0.000s 00:02:59.261 14:10:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.261 14:10:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:59.261 ************************************ 00:02:59.261 END TEST ubsan 00:02:59.261 ************************************ 00:02:59.261 14:10:40 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:59.261 14:10:40 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:59.261 14:10:40 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:59.261 14:10:40 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:59.261 14:10:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:59.261 14:10:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.261 ************************************ 00:02:59.261 START TEST build_native_dpdk 00:02:59.261 ************************************ 00:02:59.261 14:10:40 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:59.261 14:10:40 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:59.261 eeb0605f11 version: 23.11.0 00:02:59.261 238778122a doc: update release notes for 23.11 00:02:59.261 46aa6b3cfc doc: fix description of RSS features 00:02:59.261 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:59.261 7e421ae345 devtools: support skipping forbid rule check 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:59.261 14:10:40 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:59.261 14:10:40 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:59.261 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:59.262 patching file config/rte_config.h 00:02:59.262 Hunk #1 succeeded at 60 (offset 1 line). 
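The xtrace above is scripts/common.sh's cmp_versions helper splitting dotted versions on IFS=.-: and comparing them field by field; those lt/ge checks decide which of the DPDK compatibility patches autobuild_common.sh applies. A condensed sketch of the same comparison pattern follows; ver_lt is a hypothetical stand-in for the real lt/cmp_versions helpers, not their full implementation.

  # Condensed sketch of the dotted-version comparison exercised above.
  # ver_lt is a hypothetical helper; scripts/common.sh implements lt/le/gt/ge
  # on top of one shared cmp_versions function.
  ver_lt() {                      # ver_lt A B -> exit 0 (true) when A < B
    local -a a b
    local i
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                      # equal versions are not less-than
  }

  ver_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"
  ver_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"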
00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:59.262 patching file lib/pcapng/rte_pcapng.c 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:59.262 14:10:40 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:59.262 14:10:40 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:03.496 The Meson build system 00:03:03.496 Version: 1.5.0 00:03:03.496 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:03.496 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:03.496 Build type: native build 00:03:03.496 Program cat found: YES (/usr/bin/cat) 00:03:03.496 Project name: DPDK 00:03:03.496 Project version: 23.11.0 00:03:03.496 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:03.496 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:03.496 Host machine cpu family: x86_64 00:03:03.496 Host machine cpu: x86_64 00:03:03.496 Message: ## Building in Developer Mode ## 00:03:03.496 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:03.496 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:03.496 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:03.496 Program python3 found: YES (/usr/bin/python3) 00:03:03.496 Program cat found: YES (/usr/bin/cat) 00:03:03.496 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
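In the build_native_dpdk trace above, the DPDK_DRIVERS whitelist is joined with printf %s, (hence the trailing comma visible in -Denable_drivers=...net/i40e/base,) and handed to meson together with the accumulated dpdk_cflags; the configure output that begins above continues below. A minimal sketch of that configure step, with paths as they appear in the log:

  # Sketch of the DPDK configure invocation traced above; the trailing comma
  # produced by printf %s, is accepted by meson's list parsing.
  drivers=(bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base)
  dpdk_cflags='-fPIC -g -fcommon -Werror -Wno-stringop-overflow'

  meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= "-Dc_args=$dpdk_cflags" -Dmachine=native \
    -Denable_drivers="$(printf %s, "${drivers[@]}")"
  # A ninja build/install into the --prefix would follow; that step is an
  # assumption here, since it falls outside the portion of the log shown.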
00:03:03.496 Compiler for C supports arguments -march=native: YES 00:03:03.496 Checking for size of "void *" : 8 00:03:03.496 Checking for size of "void *" : 8 (cached) 00:03:03.496 Library m found: YES 00:03:03.496 Library numa found: YES 00:03:03.496 Has header "numaif.h" : YES 00:03:03.496 Library fdt found: NO 00:03:03.496 Library execinfo found: NO 00:03:03.496 Has header "execinfo.h" : YES 00:03:03.496 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:03.496 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:03.496 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:03.496 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:03.496 Run-time dependency openssl found: YES 3.1.1 00:03:03.496 Run-time dependency libpcap found: YES 1.10.4 00:03:03.496 Has header "pcap.h" with dependency libpcap: YES 00:03:03.496 Compiler for C supports arguments -Wcast-qual: YES 00:03:03.496 Compiler for C supports arguments -Wdeprecated: YES 00:03:03.496 Compiler for C supports arguments -Wformat: YES 00:03:03.496 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:03.496 Compiler for C supports arguments -Wformat-security: NO 00:03:03.496 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.496 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:03.496 Compiler for C supports arguments -Wnested-externs: YES 00:03:03.496 Compiler for C supports arguments -Wold-style-definition: YES 00:03:03.496 Compiler for C supports arguments -Wpointer-arith: YES 00:03:03.497 Compiler for C supports arguments -Wsign-compare: YES 00:03:03.497 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:03.497 Compiler for C supports arguments -Wundef: YES 00:03:03.497 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.497 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:03.497 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:03.497 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.497 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:03.497 Program objdump found: YES (/usr/bin/objdump) 00:03:03.497 Compiler for C supports arguments -mavx512f: YES 00:03:03.497 Checking if "AVX512 checking" compiles: YES 00:03:03.497 Fetching value of define "__SSE4_2__" : 1 00:03:03.497 Fetching value of define "__AES__" : 1 00:03:03.497 Fetching value of define "__AVX__" : 1 00:03:03.497 Fetching value of define "__AVX2__" : 1 00:03:03.497 Fetching value of define "__AVX512BW__" : 1 00:03:03.497 Fetching value of define "__AVX512CD__" : 1 00:03:03.497 Fetching value of define "__AVX512DQ__" : 1 00:03:03.497 Fetching value of define "__AVX512F__" : 1 00:03:03.497 Fetching value of define "__AVX512VL__" : 1 00:03:03.497 Fetching value of define "__PCLMUL__" : 1 00:03:03.497 Fetching value of define "__RDRND__" : 1 00:03:03.497 Fetching value of define "__RDSEED__" : 1 00:03:03.497 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:03.497 Fetching value of define "__znver1__" : (undefined) 00:03:03.497 Fetching value of define "__znver2__" : (undefined) 00:03:03.497 Fetching value of define "__znver3__" : (undefined) 00:03:03.497 Fetching value of define "__znver4__" : (undefined) 00:03:03.497 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:03.497 Message: lib/log: Defining dependency "log" 00:03:03.497 Message: lib/kvargs: Defining dependency "kvargs" 00:03:03.497 Message: lib/telemetry: Defining dependency "telemetry" 
00:03:03.497 Checking for function "getentropy" : NO 00:03:03.497 Message: lib/eal: Defining dependency "eal" 00:03:03.497 Message: lib/ring: Defining dependency "ring" 00:03:03.497 Message: lib/rcu: Defining dependency "rcu" 00:03:03.497 Message: lib/mempool: Defining dependency "mempool" 00:03:03.497 Message: lib/mbuf: Defining dependency "mbuf" 00:03:03.497 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:03.497 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:03.497 Compiler for C supports arguments -mpclmul: YES 00:03:03.497 Compiler for C supports arguments -maes: YES 00:03:03.497 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:03.497 Compiler for C supports arguments -mavx512bw: YES 00:03:03.497 Compiler for C supports arguments -mavx512dq: YES 00:03:03.497 Compiler for C supports arguments -mavx512vl: YES 00:03:03.497 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:03.497 Compiler for C supports arguments -mavx2: YES 00:03:03.497 Compiler for C supports arguments -mavx: YES 00:03:03.497 Message: lib/net: Defining dependency "net" 00:03:03.497 Message: lib/meter: Defining dependency "meter" 00:03:03.497 Message: lib/ethdev: Defining dependency "ethdev" 00:03:03.497 Message: lib/pci: Defining dependency "pci" 00:03:03.497 Message: lib/cmdline: Defining dependency "cmdline" 00:03:03.497 Message: lib/metrics: Defining dependency "metrics" 00:03:03.497 Message: lib/hash: Defining dependency "hash" 00:03:03.497 Message: lib/timer: Defining dependency "timer" 00:03:03.497 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.497 Message: lib/acl: Defining dependency "acl" 00:03:03.497 Message: lib/bbdev: Defining dependency "bbdev" 00:03:03.497 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:03.497 Run-time dependency libelf found: YES 0.191 00:03:03.497 Message: lib/bpf: Defining dependency "bpf" 00:03:03.497 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:03.497 Message: lib/compressdev: Defining dependency "compressdev" 00:03:03.497 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:03.497 Message: lib/distributor: Defining dependency "distributor" 00:03:03.497 Message: lib/dmadev: Defining dependency "dmadev" 00:03:03.497 Message: lib/efd: Defining dependency "efd" 00:03:03.497 Message: lib/eventdev: Defining dependency "eventdev" 00:03:03.497 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:03.497 Message: lib/gpudev: Defining dependency "gpudev" 00:03:03.497 Message: lib/gro: Defining dependency "gro" 00:03:03.497 Message: lib/gso: Defining dependency "gso" 00:03:03.497 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:03.497 Message: lib/jobstats: Defining dependency "jobstats" 00:03:03.497 Message: lib/latencystats: Defining dependency "latencystats" 00:03:03.497 Message: lib/lpm: Defining dependency "lpm" 00:03:03.497 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512IFMA__" : 1 00:03:03.497 Message: 
lib/member: Defining dependency "member" 00:03:03.497 Message: lib/pcapng: Defining dependency "pcapng" 00:03:03.497 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:03.497 Message: lib/power: Defining dependency "power" 00:03:03.497 Message: lib/rawdev: Defining dependency "rawdev" 00:03:03.497 Message: lib/regexdev: Defining dependency "regexdev" 00:03:03.497 Message: lib/mldev: Defining dependency "mldev" 00:03:03.497 Message: lib/rib: Defining dependency "rib" 00:03:03.497 Message: lib/reorder: Defining dependency "reorder" 00:03:03.497 Message: lib/sched: Defining dependency "sched" 00:03:03.497 Message: lib/security: Defining dependency "security" 00:03:03.497 Message: lib/stack: Defining dependency "stack" 00:03:03.497 Has header "linux/userfaultfd.h" : YES 00:03:03.497 Has header "linux/vduse.h" : YES 00:03:03.497 Message: lib/vhost: Defining dependency "vhost" 00:03:03.497 Message: lib/ipsec: Defining dependency "ipsec" 00:03:03.497 Message: lib/pdcp: Defining dependency "pdcp" 00:03:03.497 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:03.497 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.497 Message: lib/fib: Defining dependency "fib" 00:03:03.497 Message: lib/port: Defining dependency "port" 00:03:03.497 Message: lib/pdump: Defining dependency "pdump" 00:03:03.497 Message: lib/table: Defining dependency "table" 00:03:03.497 Message: lib/pipeline: Defining dependency "pipeline" 00:03:03.497 Message: lib/graph: Defining dependency "graph" 00:03:03.497 Message: lib/node: Defining dependency "node" 00:03:03.497 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:03.497 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:03.497 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:03.497 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:04.447 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:04.447 Compiler for C supports arguments -Wno-unused-value: YES 00:03:04.447 Compiler for C supports arguments -Wno-format: YES 00:03:04.447 Compiler for C supports arguments -Wno-format-security: YES 00:03:04.447 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:04.447 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:04.447 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:04.447 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:04.447 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:04.447 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:04.447 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:04.447 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:04.447 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:04.447 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:04.447 Has header "sys/epoll.h" : YES 00:03:04.447 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:04.447 Configuring doxy-api-html.conf using configuration 00:03:04.447 Configuring doxy-api-man.conf using configuration 00:03:04.447 Program mandb found: YES (/usr/bin/mandb) 00:03:04.447 Program sphinx-build found: NO 00:03:04.447 Configuring rte_build_config.h using configuration 00:03:04.447 Message: 00:03:04.447 ================= 00:03:04.447 Applications Enabled 00:03:04.447 ================= 00:03:04.447 00:03:04.447 apps: 00:03:04.447 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:03:04.447 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:04.447 test-pmd, test-regex, test-sad, test-security-perf, 00:03:04.447 00:03:04.447 Message: 00:03:04.447 ================= 00:03:04.447 Libraries Enabled 00:03:04.447 ================= 00:03:04.447 00:03:04.447 libs: 00:03:04.447 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:04.447 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:04.447 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:04.447 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:04.447 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:04.447 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:04.447 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:04.447 00:03:04.447 00:03:04.447 Message: 00:03:04.447 =============== 00:03:04.447 Drivers Enabled 00:03:04.447 =============== 00:03:04.447 00:03:04.447 common: 00:03:04.447 00:03:04.447 bus: 00:03:04.447 pci, vdev, 00:03:04.447 mempool: 00:03:04.447 ring, 00:03:04.447 dma: 00:03:04.447 00:03:04.447 net: 00:03:04.447 i40e, 00:03:04.447 raw: 00:03:04.447 00:03:04.447 crypto: 00:03:04.447 00:03:04.447 compress: 00:03:04.447 00:03:04.447 regex: 00:03:04.447 00:03:04.447 ml: 00:03:04.447 00:03:04.447 vdpa: 00:03:04.447 00:03:04.447 event: 00:03:04.447 00:03:04.447 baseband: 00:03:04.447 00:03:04.447 gpu: 00:03:04.447 00:03:04.447 00:03:04.447 Message: 00:03:04.447 ================= 00:03:04.447 Content Skipped 00:03:04.447 ================= 00:03:04.447 00:03:04.447 apps: 00:03:04.447 00:03:04.447 libs: 00:03:04.447 00:03:04.447 drivers: 00:03:04.447 common/cpt: not in enabled drivers build config 00:03:04.447 common/dpaax: not in enabled drivers build config 00:03:04.448 common/iavf: not in enabled drivers build config 00:03:04.448 common/idpf: not in enabled drivers build config 00:03:04.448 common/mvep: not in enabled drivers build config 00:03:04.448 common/octeontx: not in enabled drivers build config 00:03:04.448 bus/auxiliary: not in enabled drivers build config 00:03:04.448 bus/cdx: not in enabled drivers build config 00:03:04.448 bus/dpaa: not in enabled drivers build config 00:03:04.448 bus/fslmc: not in enabled drivers build config 00:03:04.448 bus/ifpga: not in enabled drivers build config 00:03:04.448 bus/platform: not in enabled drivers build config 00:03:04.448 bus/vmbus: not in enabled drivers build config 00:03:04.448 common/cnxk: not in enabled drivers build config 00:03:04.448 common/mlx5: not in enabled drivers build config 00:03:04.448 common/nfp: not in enabled drivers build config 00:03:04.448 common/qat: not in enabled drivers build config 00:03:04.448 common/sfc_efx: not in enabled drivers build config 00:03:04.448 mempool/bucket: not in enabled drivers build config 00:03:04.448 mempool/cnxk: not in enabled drivers build config 00:03:04.448 mempool/dpaa: not in enabled drivers build config 00:03:04.448 mempool/dpaa2: not in enabled drivers build config 00:03:04.448 mempool/octeontx: not in enabled drivers build config 00:03:04.448 mempool/stack: not in enabled drivers build config 00:03:04.448 dma/cnxk: not in enabled drivers build config 00:03:04.448 dma/dpaa: not in enabled drivers build config 00:03:04.448 dma/dpaa2: not in enabled drivers build config 00:03:04.448 dma/hisilicon: not in enabled drivers build config 00:03:04.448 dma/idxd: not in enabled drivers build 
config 00:03:04.448 dma/ioat: not in enabled drivers build config 00:03:04.448 dma/skeleton: not in enabled drivers build config 00:03:04.448 net/af_packet: not in enabled drivers build config 00:03:04.448 net/af_xdp: not in enabled drivers build config 00:03:04.448 net/ark: not in enabled drivers build config 00:03:04.448 net/atlantic: not in enabled drivers build config 00:03:04.448 net/avp: not in enabled drivers build config 00:03:04.448 net/axgbe: not in enabled drivers build config 00:03:04.448 net/bnx2x: not in enabled drivers build config 00:03:04.448 net/bnxt: not in enabled drivers build config 00:03:04.448 net/bonding: not in enabled drivers build config 00:03:04.448 net/cnxk: not in enabled drivers build config 00:03:04.448 net/cpfl: not in enabled drivers build config 00:03:04.448 net/cxgbe: not in enabled drivers build config 00:03:04.448 net/dpaa: not in enabled drivers build config 00:03:04.448 net/dpaa2: not in enabled drivers build config 00:03:04.448 net/e1000: not in enabled drivers build config 00:03:04.448 net/ena: not in enabled drivers build config 00:03:04.448 net/enetc: not in enabled drivers build config 00:03:04.448 net/enetfec: not in enabled drivers build config 00:03:04.448 net/enic: not in enabled drivers build config 00:03:04.448 net/failsafe: not in enabled drivers build config 00:03:04.448 net/fm10k: not in enabled drivers build config 00:03:04.448 net/gve: not in enabled drivers build config 00:03:04.448 net/hinic: not in enabled drivers build config 00:03:04.448 net/hns3: not in enabled drivers build config 00:03:04.448 net/iavf: not in enabled drivers build config 00:03:04.448 net/ice: not in enabled drivers build config 00:03:04.448 net/idpf: not in enabled drivers build config 00:03:04.448 net/igc: not in enabled drivers build config 00:03:04.448 net/ionic: not in enabled drivers build config 00:03:04.448 net/ipn3ke: not in enabled drivers build config 00:03:04.448 net/ixgbe: not in enabled drivers build config 00:03:04.448 net/mana: not in enabled drivers build config 00:03:04.448 net/memif: not in enabled drivers build config 00:03:04.448 net/mlx4: not in enabled drivers build config 00:03:04.448 net/mlx5: not in enabled drivers build config 00:03:04.448 net/mvneta: not in enabled drivers build config 00:03:04.448 net/mvpp2: not in enabled drivers build config 00:03:04.448 net/netvsc: not in enabled drivers build config 00:03:04.448 net/nfb: not in enabled drivers build config 00:03:04.448 net/nfp: not in enabled drivers build config 00:03:04.448 net/ngbe: not in enabled drivers build config 00:03:04.448 net/null: not in enabled drivers build config 00:03:04.448 net/octeontx: not in enabled drivers build config 00:03:04.448 net/octeon_ep: not in enabled drivers build config 00:03:04.448 net/pcap: not in enabled drivers build config 00:03:04.448 net/pfe: not in enabled drivers build config 00:03:04.448 net/qede: not in enabled drivers build config 00:03:04.448 net/ring: not in enabled drivers build config 00:03:04.448 net/sfc: not in enabled drivers build config 00:03:04.448 net/softnic: not in enabled drivers build config 00:03:04.448 net/tap: not in enabled drivers build config 00:03:04.448 net/thunderx: not in enabled drivers build config 00:03:04.448 net/txgbe: not in enabled drivers build config 00:03:04.448 net/vdev_netvsc: not in enabled drivers build config 00:03:04.448 net/vhost: not in enabled drivers build config 00:03:04.448 net/virtio: not in enabled drivers build config 00:03:04.448 net/vmxnet3: not in enabled drivers build config 
00:03:04.448 raw/cnxk_bphy: not in enabled drivers build config 00:03:04.448 raw/cnxk_gpio: not in enabled drivers build config 00:03:04.448 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:04.448 raw/ifpga: not in enabled drivers build config 00:03:04.448 raw/ntb: not in enabled drivers build config 00:03:04.448 raw/skeleton: not in enabled drivers build config 00:03:04.448 crypto/armv8: not in enabled drivers build config 00:03:04.448 crypto/bcmfs: not in enabled drivers build config 00:03:04.448 crypto/caam_jr: not in enabled drivers build config 00:03:04.448 crypto/ccp: not in enabled drivers build config 00:03:04.448 crypto/cnxk: not in enabled drivers build config 00:03:04.448 crypto/dpaa_sec: not in enabled drivers build config 00:03:04.448 crypto/dpaa2_sec: not in enabled drivers build config 00:03:04.448 crypto/ipsec_mb: not in enabled drivers build config 00:03:04.448 crypto/mlx5: not in enabled drivers build config 00:03:04.448 crypto/mvsam: not in enabled drivers build config 00:03:04.448 crypto/nitrox: not in enabled drivers build config 00:03:04.448 crypto/null: not in enabled drivers build config 00:03:04.448 crypto/octeontx: not in enabled drivers build config 00:03:04.448 crypto/openssl: not in enabled drivers build config 00:03:04.448 crypto/scheduler: not in enabled drivers build config 00:03:04.448 crypto/uadk: not in enabled drivers build config 00:03:04.448 crypto/virtio: not in enabled drivers build config 00:03:04.448 compress/isal: not in enabled drivers build config 00:03:04.448 compress/mlx5: not in enabled drivers build config 00:03:04.448 compress/octeontx: not in enabled drivers build config 00:03:04.448 compress/zlib: not in enabled drivers build config 00:03:04.448 regex/mlx5: not in enabled drivers build config 00:03:04.448 regex/cn9k: not in enabled drivers build config 00:03:04.448 ml/cnxk: not in enabled drivers build config 00:03:04.448 vdpa/ifc: not in enabled drivers build config 00:03:04.448 vdpa/mlx5: not in enabled drivers build config 00:03:04.448 vdpa/nfp: not in enabled drivers build config 00:03:04.448 vdpa/sfc: not in enabled drivers build config 00:03:04.448 event/cnxk: not in enabled drivers build config 00:03:04.448 event/dlb2: not in enabled drivers build config 00:03:04.448 event/dpaa: not in enabled drivers build config 00:03:04.448 event/dpaa2: not in enabled drivers build config 00:03:04.448 event/dsw: not in enabled drivers build config 00:03:04.448 event/opdl: not in enabled drivers build config 00:03:04.448 event/skeleton: not in enabled drivers build config 00:03:04.448 event/sw: not in enabled drivers build config 00:03:04.448 event/octeontx: not in enabled drivers build config 00:03:04.448 baseband/acc: not in enabled drivers build config 00:03:04.448 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:04.448 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:04.448 baseband/la12xx: not in enabled drivers build config 00:03:04.448 baseband/null: not in enabled drivers build config 00:03:04.448 baseband/turbo_sw: not in enabled drivers build config 00:03:04.448 gpu/cuda: not in enabled drivers build config 00:03:04.448 00:03:04.448 00:03:04.448 Build targets in project: 215 00:03:04.448 00:03:04.448 DPDK 23.11.0 00:03:04.448 00:03:04.448 User defined options 00:03:04.448 libdir : lib 00:03:04.448 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:04.448 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:04.448 c_link_args : 00:03:04.448 enable_docs : false 00:03:04.448 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:04.448 enable_kmods : false 00:03:04.448 machine : native 00:03:04.448 tests : false 00:03:04.448 00:03:04.448 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.448 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:04.448 14:10:46 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:04.448 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:04.707 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:04.707 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.707 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.707 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:04.707 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:04.707 [6/705] Linking static target lib/librte_kvargs.a 00:03:04.707 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:04.707 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:04.707 [9/705] Linking static target lib/librte_log.a 00:03:04.707 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:04.966 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.966 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.966 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.966 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.966 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.966 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:04.966 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.225 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:05.225 [19/705] Linking target lib/librte_log.so.24.0 00:03:05.225 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:05.225 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:05.225 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:05.225 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:05.225 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:05.484 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:05.484 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:05.484 [27/705] Linking target lib/librte_kvargs.so.24.0 00:03:05.484 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:05.484 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:05.484 [30/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:05.484 [31/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:05.484 [32/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:05.484 [33/705] Linking static target lib/librte_telemetry.a 00:03:05.484 [34/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:05.484 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:05.484 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:05.484 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:05.743 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:05.743 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:05.743 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:05.743 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:05.743 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:05.743 [43/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.743 [44/705] Linking target lib/librte_telemetry.so.24.0 00:03:05.743 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:06.001 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.001 [47/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:06.001 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:06.001 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:06.259 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:06.259 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:06.259 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:06.259 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:06.259 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:06.259 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:06.259 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:06.259 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:06.259 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:06.259 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:06.259 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:06.259 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:06.259 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:06.518 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:06.518 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:06.518 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:06.518 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:06.518 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:06.518 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:06.518 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:06.518 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:06.776 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:06.776 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:06.776 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
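(Editor's note: the "User defined options" summary and the deprecation warning a few entries above show how this DPDK tree was configured, but the exact configure command is not part of this log excerpt. The sketch below is a hypothetical reconstruction of that step, using the non-deprecated `meson setup` form the warning asks for; the prefix, libdir, c_args, driver list, machine and tests values are copied from the summary above, everything else is an assumption. The ninja line is the build command as it actually appears in the log.)

    # Hypothetical reconstruction of the configure step implied by the
    # "User defined options" summary above (not the literal command from this job).
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Build step as shown in the log:
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10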
00:03:06.776 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:06.776 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:06.776 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:06.776 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:06.776 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:07.035 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:07.035 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:07.035 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:07.035 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:07.035 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:07.035 [84/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:07.035 [85/705] Linking static target lib/librte_ring.a 00:03:07.035 [86/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.294 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:07.294 [88/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:07.294 [89/705] Linking static target lib/librte_eal.a 00:03:07.294 [90/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:07.294 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:07.294 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:07.294 [93/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:07.553 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:07.553 [95/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:07.553 [96/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:07.553 [97/705] Linking static target lib/librte_mempool.a 00:03:07.553 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:07.553 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:07.553 [100/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:07.553 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:07.553 [102/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:07.553 [103/705] Linking static target lib/librte_rcu.a 00:03:07.812 [104/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:07.812 [105/705] Linking static target lib/librte_meter.a 00:03:07.812 [106/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:07.812 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:07.812 [108/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:07.812 [109/705] Linking static target lib/librte_net.a 00:03:07.812 [110/705] Linking static target lib/librte_mbuf.a 00:03:07.812 [111/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.812 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:08.070 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.070 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:08.070 [115/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.070 [116/705] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.070 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:08.071 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.329 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:08.329 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:08.587 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:08.587 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:08.587 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:08.587 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:08.587 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:08.587 [126/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:08.587 [127/705] Linking static target lib/librte_pci.a 00:03:08.587 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:08.845 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:08.845 [130/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.845 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:08.845 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:08.845 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:08.845 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:08.845 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:08.845 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:08.845 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:08.845 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:08.845 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:09.104 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:09.104 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:09.104 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:09.104 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:09.104 [144/705] Linking static target lib/librte_cmdline.a 00:03:09.104 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:09.104 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:09.104 [147/705] Linking static target lib/librte_metrics.a 00:03:09.363 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:09.363 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:09.363 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.620 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:09.620 [152/705] Linking static target lib/librte_timer.a 00:03:09.620 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:09.620 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.879 [155/705] Generating lib/timer.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:09.879 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:09.879 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:09.879 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:09.879 [159/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:10.137 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:10.137 [161/705] Linking static target lib/librte_bitratestats.a 00:03:10.137 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:10.395 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.395 [164/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:10.395 [165/705] Linking static target lib/librte_bbdev.a 00:03:10.395 [166/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:10.653 [167/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:10.653 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:10.653 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:10.912 [170/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.912 [171/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.912 [172/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:10.912 [173/705] Linking static target lib/librte_hash.a 00:03:10.912 [174/705] Linking static target lib/acl/libavx2_tmp.a 00:03:10.912 [175/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:10.912 [176/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:10.912 [177/705] Linking static target lib/librte_ethdev.a 00:03:11.170 [178/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:11.170 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:11.170 [180/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.428 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:11.428 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:11.428 [183/705] Linking static target lib/librte_cfgfile.a 00:03:11.428 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:11.428 [185/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.428 [186/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:11.428 [187/705] Linking target lib/librte_eal.so.24.0 00:03:11.686 [188/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:11.686 [189/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:11.686 [190/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.686 [191/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:11.686 [192/705] Linking static target lib/librte_bpf.a 00:03:11.686 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:11.686 [194/705] Linking target lib/librte_ring.so.24.0 00:03:11.686 [195/705] Linking target lib/librte_pci.so.24.0 00:03:11.686 [196/705] Linking target lib/librte_meter.so.24.0 00:03:11.686 [197/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:11.686 [198/705] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:11.686 [199/705] Linking target lib/librte_timer.so.24.0 00:03:11.686 [200/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:11.686 [201/705] Linking target lib/librte_rcu.so.24.0 00:03:11.686 [202/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:11.686 [203/705] Linking target lib/librte_mempool.so.24.0 00:03:11.686 [204/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:11.686 [205/705] Linking target lib/librte_cfgfile.so.24.0 00:03:11.686 [206/705] Linking static target lib/librte_compressdev.a 00:03:11.945 [207/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:11.945 [208/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:11.945 [209/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:11.945 [210/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:11.945 [211/705] Linking target lib/librte_mbuf.so.24.0 00:03:11.945 [212/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:11.945 [213/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.945 [214/705] Linking target lib/librte_net.so.24.0 00:03:11.945 [215/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:11.945 [216/705] Linking target lib/librte_bbdev.so.24.0 00:03:11.945 [217/705] Linking static target lib/librte_acl.a 00:03:12.203 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:12.203 [219/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:12.203 [220/705] Linking target lib/librte_cmdline.so.24.0 00:03:12.203 [221/705] Linking target lib/librte_hash.so.24.0 00:03:12.203 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:12.203 [223/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.203 [224/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:12.203 [225/705] Linking static target lib/librte_distributor.a 00:03:12.203 [226/705] Linking target lib/librte_compressdev.so.24.0 00:03:12.203 [227/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:12.203 [228/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.203 [229/705] Linking target lib/librte_acl.so.24.0 00:03:12.460 [230/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.460 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:12.460 [232/705] Linking target lib/librte_distributor.so.24.0 00:03:12.460 [233/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:12.460 [234/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:12.460 [235/705] Linking static target lib/librte_dmadev.a 00:03:12.460 [236/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:12.718 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:12.718 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.718 
[239/705] Linking target lib/librte_dmadev.so.24.0 00:03:12.978 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:12.978 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:12.978 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:12.978 [243/705] Linking static target lib/librte_efd.a 00:03:13.237 [244/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:13.237 [245/705] Linking static target lib/librte_cryptodev.a 00:03:13.237 [246/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:13.237 [247/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:13.237 [248/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.237 [249/705] Linking target lib/librte_efd.so.24.0 00:03:13.237 [250/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:13.237 [251/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:13.495 [252/705] Linking static target lib/librte_dispatcher.a 00:03:13.495 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:13.495 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:13.495 [255/705] Linking static target lib/librte_gpudev.a 00:03:13.495 [256/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:13.752 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.752 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:13.752 [259/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:14.009 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:14.009 [261/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.009 [262/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:14.009 [263/705] Linking target lib/librte_cryptodev.so.24.0 00:03:14.009 [264/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:14.009 [265/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.009 [266/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:14.009 [267/705] Linking target lib/librte_gpudev.so.24.0 00:03:14.267 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:14.267 [269/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:14.267 [270/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:14.267 [271/705] Linking static target lib/librte_gro.a 00:03:14.267 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:14.267 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:14.267 [274/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:14.267 [275/705] Linking static target lib/librte_eventdev.a 00:03:14.267 [276/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.267 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:14.267 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:14.267 [279/705] Linking static target lib/librte_gso.a 00:03:14.526 [280/705] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:14.526 [281/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.526 [282/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.526 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:14.526 [284/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:14.526 [285/705] Linking target lib/librte_ethdev.so.24.0 00:03:14.526 [286/705] Linking static target lib/librte_jobstats.a 00:03:14.526 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:14.526 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:14.785 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:14.785 [290/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:14.785 [291/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:14.785 [292/705] Linking target lib/librte_metrics.so.24.0 00:03:14.785 [293/705] Linking target lib/librte_bpf.so.24.0 00:03:14.785 [294/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:14.785 [295/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.785 [296/705] Linking target lib/librte_bitratestats.so.24.0 00:03:14.785 [297/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:14.785 [298/705] Linking static target lib/librte_ip_frag.a 00:03:14.785 [299/705] Linking target lib/librte_gro.so.24.0 00:03:14.785 [300/705] Linking target lib/librte_gso.so.24.0 00:03:14.785 [301/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:14.786 [302/705] Linking static target lib/librte_latencystats.a 00:03:15.045 [303/705] Linking target lib/librte_jobstats.so.24.0 00:03:15.045 [304/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:15.045 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:15.045 [306/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:15.045 [307/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.045 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.045 [309/705] Linking target lib/librte_ip_frag.so.24.0 00:03:15.045 [310/705] Linking target lib/librte_latencystats.so.24.0 00:03:15.304 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:15.304 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:15.304 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:15.304 [314/705] Linking static target lib/librte_lpm.a 00:03:15.304 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:15.304 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:15.304 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.304 [318/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:15.562 [319/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:15.562 [320/705] Linking static target lib/librte_pcapng.a 00:03:15.562 [321/705] Generating lib/lpm.sym_chk with 
a custom command (wrapped by meson to capture output) 00:03:15.562 [322/705] Linking target lib/librte_lpm.so.24.0 00:03:15.562 [323/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:15.562 [324/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:15.562 [325/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.562 [326/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:15.562 [327/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:15.562 [328/705] Linking target lib/librte_pcapng.so.24.0 00:03:15.562 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:15.820 [330/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:15.820 [331/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.820 [332/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:15.820 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:15.820 [334/705] Linking target lib/librte_eventdev.so.24.0 00:03:15.820 [335/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:15.820 [336/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:15.820 [337/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:16.077 [338/705] Linking target lib/librte_dispatcher.so.24.0 00:03:16.077 [339/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:16.077 [340/705] Linking static target lib/librte_power.a 00:03:16.077 [341/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:16.077 [342/705] Linking static target lib/librte_member.a 00:03:16.077 [343/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:16.077 [344/705] Linking static target lib/librte_regexdev.a 00:03:16.077 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:16.077 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:16.077 [347/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:16.077 [348/705] Linking static target lib/librte_rawdev.a 00:03:16.077 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:16.337 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:16.337 [351/705] Linking static target lib/librte_mldev.a 00:03:16.337 [352/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.338 [353/705] Linking target lib/librte_member.so.24.0 00:03:16.338 [354/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:16.338 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:16.338 [356/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.595 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.595 [358/705] Linking target lib/librte_rawdev.so.24.0 00:03:16.595 [359/705] Linking target lib/librte_power.so.24.0 00:03:16.595 [360/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:16.595 [361/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:16.595 [362/705] Linking static target lib/librte_rib.a 00:03:16.595 [363/705] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:16.595 [364/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:16.596 [365/705] Linking static target lib/librte_reorder.a 00:03:16.596 [366/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.596 [367/705] Linking target lib/librte_regexdev.so.24.0 00:03:16.596 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.853 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:16.853 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:16.853 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:16.853 [372/705] Linking static target lib/librte_stack.a 00:03:16.853 [373/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.853 [374/705] Linking target lib/librte_reorder.so.24.0 00:03:16.853 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:16.853 [376/705] Linking static target lib/librte_security.a 00:03:16.853 [377/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.853 [378/705] Linking target lib/librte_rib.so.24.0 00:03:16.853 [379/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:16.853 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.853 [381/705] Linking target lib/librte_stack.so.24.0 00:03:16.854 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:17.111 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.111 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.111 [385/705] Linking target lib/librte_mldev.so.24.0 00:03:17.111 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.111 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.111 [388/705] Linking target lib/librte_security.so.24.0 00:03:17.368 [389/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:17.368 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:17.368 [391/705] Linking static target lib/librte_sched.a 00:03:17.368 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.368 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:17.626 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:17.626 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.626 [396/705] Linking target lib/librte_sched.so.24.0 00:03:17.626 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:17.626 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:17.626 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:17.884 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:17.884 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:17.884 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:17.884 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:18.141 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:18.141 
[405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:18.141 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:18.410 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:18.410 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:18.410 [409/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:18.410 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:18.410 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:18.410 [412/705] Linking static target lib/librte_ipsec.a 00:03:18.410 [413/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:18.667 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.667 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:18.667 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:18.667 [417/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:18.667 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:18.925 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:18.925 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:18.925 [421/705] Linking static target lib/librte_fib.a 00:03:18.925 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:19.182 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:19.182 [424/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.182 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:19.182 [426/705] Linking target lib/librte_fib.so.24.0 00:03:19.182 [427/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:19.182 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:19.439 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:19.439 [430/705] Linking static target lib/librte_pdcp.a 00:03:19.439 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.439 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:19.698 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:19.698 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:19.698 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:19.698 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:19.698 [437/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:19.698 [438/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:19.955 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:19.955 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:19.955 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:20.213 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:20.213 [443/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:20.213 [444/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:20.213 [445/705] Linking static target lib/librte_port.a 00:03:20.213 [446/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:20.213 [447/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:20.213 [448/705] Linking static target lib/librte_pdump.a 00:03:20.213 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:20.471 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:20.471 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:20.471 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.471 [453/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.471 [454/705] Linking target lib/librte_pdump.so.24.0 00:03:20.471 [455/705] Linking target lib/librte_port.so.24.0 00:03:20.729 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:20.729 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:20.729 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:20.729 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:20.729 [460/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.729 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:20.988 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:20.988 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:20.988 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:20.988 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:20.988 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:20.988 [467/705] Linking static target lib/librte_table.a 00:03:21.247 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:21.247 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:21.513 [470/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:21.513 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:21.513 [472/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.513 [473/705] Linking target lib/librte_table.so.24.0 00:03:21.513 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:21.778 [475/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:21.779 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:21.779 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:21.779 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:21.779 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:22.036 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:22.036 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:22.036 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:22.294 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:22.294 [484/705] Linking static target lib/librte_graph.a 00:03:22.294 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:22.294 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:22.294 [487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:22.294 
[488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:22.553 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:22.553 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.553 [491/705] Linking target lib/librte_graph.so.24.0 00:03:22.553 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:22.553 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:22.812 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:22.812 [495/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:22.812 [496/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:22.812 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:22.812 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:23.070 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:23.070 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:23.070 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:23.070 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:23.070 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:23.328 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:23.328 [505/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:23.328 [506/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:23.328 [507/705] Linking static target lib/librte_node.a 00:03:23.328 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:23.328 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:23.328 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:23.585 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:23.585 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:23.585 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.585 [514/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:23.585 [515/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:23.585 [516/705] Linking target lib/librte_node.so.24.0 00:03:23.585 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:23.585 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.585 [519/705] Linking static target drivers/librte_bus_vdev.a 00:03:23.585 [520/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:23.585 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.585 [522/705] Linking static target drivers/librte_bus_pci.a 00:03:23.842 [523/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:23.842 [524/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.842 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:23.842 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.842 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:23.842 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.842 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:24.101 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:24.101 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:24.101 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:24.101 [533/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.101 [534/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:24.101 [535/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:24.101 [536/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:24.101 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:24.101 [538/705] Linking static target drivers/librte_mempool_ring.a 00:03:24.101 [539/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:24.101 [540/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:24.101 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:24.359 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:24.359 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:24.617 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:24.617 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:24.875 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:25.134 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:25.135 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:25.135 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:25.135 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:25.394 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:25.394 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:25.394 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:25.654 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:25.654 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:25.654 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:25.654 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:25.913 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:25.913 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:26.172 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:26.172 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:26.172 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:26.433 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:26.433 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:26.433 [565/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:26.433 [566/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:26.433 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:26.692 [568/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:26.692 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:26.692 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:26.692 [571/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:26.692 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:26.952 [573/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:26.952 [574/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:27.213 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:27.213 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:27.213 [577/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:27.213 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:27.213 [579/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:27.213 [580/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:27.213 [581/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:27.473 [582/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:27.473 [583/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:27.473 [584/705] Linking static target drivers/librte_net_i40e.a 00:03:27.473 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:27.473 [586/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:27.473 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:27.731 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:27.731 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:27.990 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:27.990 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:27.990 [592/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.990 [593/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:27.990 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:28.250 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:28.250 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:28.250 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:28.250 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:28.571 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:28.571 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:28.571 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:28.571 [602/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.571 [603/705] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:28.571 [604/705] Linking static target lib/librte_vhost.a 00:03:28.571 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:28.571 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:28.851 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:28.851 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:28.851 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:28.851 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:28.851 [611/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:28.851 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:29.109 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:29.109 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:29.368 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:29.368 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:29.368 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.368 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:29.625 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:29.883 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:29.883 [621/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:29.883 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:29.883 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:29.883 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:29.883 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:30.141 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:30.141 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:30.141 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:30.141 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:30.141 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:30.141 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:30.400 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:30.400 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:30.400 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:30.400 [635/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:30.400 [636/705] Linking static target lib/librte_pipeline.a 00:03:30.400 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:30.400 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:30.400 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:30.658 [640/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:30.658 [641/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:30.658 [642/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:30.658 [643/705] Linking target app/dpdk-dumpcap 00:03:30.917 [644/705] Linking target app/dpdk-pdump 00:03:30.917 [645/705] Linking target app/dpdk-graph 00:03:30.917 [646/705] Linking target app/dpdk-proc-info 00:03:30.917 [647/705] Linking target app/dpdk-test-acl 00:03:30.917 [648/705] Linking target app/dpdk-test-cmdline 00:03:30.917 [649/705] Linking target app/dpdk-test-compress-perf 00:03:31.175 [650/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:31.175 [651/705] Linking target app/dpdk-test-crypto-perf 00:03:31.175 [652/705] Linking target app/dpdk-test-fib 00:03:31.175 [653/705] Linking target app/dpdk-test-dma-perf 00:03:31.175 [654/705] Linking target app/dpdk-test-flow-perf 00:03:31.175 [655/705] Linking target app/dpdk-test-gpudev 00:03:31.434 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:31.434 [657/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:31.434 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:31.434 [659/705] Linking target app/dpdk-test-eventdev 00:03:31.434 [660/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:31.434 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:31.434 [662/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:31.693 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:31.693 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:31.693 [665/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:31.693 [666/705] Linking target app/dpdk-test-mldev 00:03:31.951 [667/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:31.951 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:31.951 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:32.210 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:32.210 [671/705] Linking target app/dpdk-test-bbdev 00:03:32.210 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:32.210 [673/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:32.210 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:32.468 [675/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.468 [676/705] Linking target lib/librte_pipeline.so.24.0 00:03:32.468 [677/705] Linking target app/dpdk-test-pipeline 00:03:32.468 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:32.469 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:32.727 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:32.727 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:32.727 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:32.727 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:32.985 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:32.985 [685/705] 
Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:32.985 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:32.985 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:33.243 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:33.243 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:33.243 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:33.243 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:33.501 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:33.758 [693/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:33.758 [694/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:33.758 [695/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:33.758 [696/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:34.017 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:34.017 [698/705] Linking target app/dpdk-test-regex 00:03:34.018 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:34.018 [700/705] Linking target app/dpdk-test-sad 00:03:34.018 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:34.018 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:34.277 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:34.277 [704/705] Linking target app/dpdk-test-security-perf 00:03:34.844 [705/705] Linking target app/dpdk-testpmd 00:03:34.844 14:11:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:34.844 14:11:16 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.844 14:11:16 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:34.844 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:34.844 [0/1] Installing files. 
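For reference, the build-and-install step logged above can be reproduced outside the CI pipeline with a plain meson/ninja sequence. This is a minimal sketch only: it assumes a DPDK checkout at /home/vagrant/spdk_repo/dpdk and an install prefix of /home/vagrant/spdk_repo/dpdk/build (the destination visible in the log below), and it does not reproduce the exact meson options that autobuild_common.sh passes when configuring build-tmp.

  # Minimal sketch; paths mirror this log, the meson options are assumptions
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build   # configure (CI script's extra options omitted)
  ninja -C build-tmp -j10                                             # build the 705 targets listed above
  ninja -C build-tmp -j10 install                                     # install libraries, apps and example sources under the prefix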
00:03:34.844 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.844 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:34.845 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.106 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.107 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.110 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:35.110 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:35.110 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:35.110 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.110 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.372 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.372 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.372 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.372 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:35.372 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.372 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.373 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:35.374 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:35.374 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:35.374 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:35.374 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:35.374 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:35.374 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:35.374 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:35.374 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:35.374 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:35.374 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:35.374 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:35.374 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:35.374 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:35.374 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:35.374 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:35.374 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:35.374 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:35.374 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:35.374 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:35.374 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:35.374 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:35.374 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:35.374 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:35.374 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:35.374 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:35.374 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:35.374 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:35.374 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:35.374 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:35.374 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:35.374 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:35.374 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:35.374 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:35.374 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:35.374 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:35.374 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:35.374 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:35.374 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:35.374 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:35.374 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:35.374 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:35.374 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:35.374 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:35.374 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:35.374 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:35.374 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:35.374 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:35.374 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:35.374 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:35.374 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:35.374 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:35.375 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:35.375 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:35.375 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:35.375 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:35.375 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:35.375 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:35.375 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:35.375 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:35.375 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:35.375 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:35.375 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:35.375 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:35.375 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:35.375 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:35.375 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:35.375 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:35.375 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:35.375 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:35.375 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:35.375 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:35.375 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:35.375 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:35.375 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:35.375 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:35.375 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:35.375 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:35.375 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:35.375 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:35.375 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:35.375 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:35.375 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:35.375 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:35.375 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:35.375 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:35.375 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:35.375 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:35.375 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:35.375 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:35.375 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:35.375 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:35.375 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:35.375 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:35.375 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:35.375 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:35.375 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:35.375 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:35.375 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:35.375 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:35.375 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:35.375 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:35.375 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:35.375 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:35.375 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:35.375 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:35.375 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:35.375 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:35.375 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:35.375 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:35.375 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:35.375 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:35.375 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:35.375 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:35.375 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:35.375 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:35.375 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:35.375 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:35.375 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:35.375 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:35.375 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:35.375 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:35.375 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:35.375 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:35.375 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:35.375 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:35.375 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:35.375 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:35.375 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:35.375 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:35.375 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:35.375 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:35.375 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
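The "Installing symlink pointing to ..." entries above, together with the final librte_net_i40e entry just below, set up the usual shared-library version chain for the relocated PMDs: the fully versioned file is the real object, and two symlinks provide the soname and the unversioned development name. A minimal sketch of the resulting layout for one driver, with the directory and file names taken from the log; the commands themselves are only illustrative and are not the contents of symlink-drivers-solibs.sh:

  cd /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
  # librte_bus_pci.so.24.0 is the real shared object placed here
  ln -s librte_bus_pci.so.24.0 librte_bus_pci.so.24   # soname link, resolved by the dynamic loader at run time
  ln -s librte_bus_pci.so.24 librte_bus_pci.so        # development link, used when linking with -lrte_bus_pci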
00:03:35.375 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:35.375 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:35.375 14:11:17 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:35.375 14:11:17 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:35.375 00:03:35.375 real 0m36.185s 00:03:35.375 user 4m14.521s 00:03:35.375 sys 0m37.037s 00:03:35.375 14:11:17 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:35.375 ************************************ 00:03:35.375 END TEST build_native_dpdk 00:03:35.375 ************************************ 00:03:35.375 14:11:17 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:35.375 14:11:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:35.375 14:11:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:35.375 14:11:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:35.633 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:35.633 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:35.633 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:35.633 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:36.200 Using 'verbs' RDMA provider 00:03:47.143 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:57.115 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:57.115 Creating mk/config.mk...done. 00:03:57.115 Creating mk/cc.flags.mk...done. 00:03:57.115 Type 'make' to build. 00:03:57.115 14:11:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:57.115 14:11:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:57.115 14:11:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:57.115 14:11:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.115 ************************************ 00:03:57.115 START TEST make 00:03:57.115 ************************************ 00:03:57.115 14:11:38 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:57.115 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:57.115 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:57.115 meson setup builddir \ 00:03:57.115 -Dwith-libaio=enabled \ 00:03:57.115 -Dwith-liburing=enabled \ 00:03:57.115 -Dwith-libvfn=disabled \ 00:03:57.115 -Dwith-spdk=false && \ 00:03:57.115 meson compile -C builddir && \ 00:03:57.115 cd -) 00:03:57.115 make[1]: Nothing to be done for 'all'. 
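The configure invocation above points SPDK at the freshly installed DPDK via --with-dpdk=/home/vagrant/spdk_repo/dpdk/build, and the "Using ... pkgconfig for additional libs" line shows that lookup going through the libdpdk.pc / libdpdk-libs.pc files installed earlier. A quick way to see what that lookup returns, assuming the same paths as this run (illustrative commands, not part of the autotest):

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # DPDK version recorded in libdpdk.pc
  pkg-config --cflags --libs libdpdk     # include and link flags the SPDK build consumes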
00:03:59.017 The Meson build system 00:03:59.017 Version: 1.5.0 00:03:59.017 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:59.017 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:59.017 Build type: native build 00:03:59.017 Project name: xnvme 00:03:59.017 Project version: 0.7.3 00:03:59.017 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:59.017 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:59.017 Host machine cpu family: x86_64 00:03:59.017 Host machine cpu: x86_64 00:03:59.017 Message: host_machine.system: linux 00:03:59.017 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:59.017 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:59.017 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:59.017 Run-time dependency threads found: YES 00:03:59.017 Has header "setupapi.h" : NO 00:03:59.017 Has header "linux/blkzoned.h" : YES 00:03:59.017 Has header "linux/blkzoned.h" : YES (cached) 00:03:59.017 Has header "libaio.h" : YES 00:03:59.017 Library aio found: YES 00:03:59.017 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:59.017 Run-time dependency liburing found: YES 2.2 00:03:59.017 Dependency libvfn skipped: feature with-libvfn disabled 00:03:59.017 Run-time dependency appleframeworks found: NO (tried framework) 00:03:59.017 Run-time dependency appleframeworks found: NO (tried framework) 00:03:59.017 Configuring xnvme_config.h using configuration 00:03:59.017 Configuring xnvme.spec using configuration 00:03:59.017 Run-time dependency bash-completion found: YES 2.11 00:03:59.017 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:59.017 Program cp found: YES (/usr/bin/cp) 00:03:59.017 Has header "winsock2.h" : NO 00:03:59.017 Has header "dbghelp.h" : NO 00:03:59.017 Library rpcrt4 found: NO 00:03:59.017 Library rt found: YES 00:03:59.017 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:59.017 Found CMake: /usr/bin/cmake (3.27.7) 00:03:59.017 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:59.017 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:59.017 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:59.017 Build targets in project: 32 00:03:59.017 00:03:59.017 xnvme 0.7.3 00:03:59.017 00:03:59.017 User defined options 00:03:59.017 with-libaio : enabled 00:03:59.017 with-liburing: enabled 00:03:59.017 with-libvfn : disabled 00:03:59.017 with-spdk : false 00:03:59.017 00:03:59.017 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:59.017 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:59.017 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:59.017 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:59.017 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:59.017 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:59.276 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:59.276 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:59.276 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:59.276 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:59.276 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:59.276 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:59.276 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:59.276 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:59.276 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:59.276 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:59.276 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:59.276 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:59.276 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:59.276 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:59.276 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:59.276 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:59.276 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:59.276 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:59.276 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:59.276 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:59.277 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:59.277 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:59.277 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:59.277 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:59.277 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:59.535 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:59.535 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:59.535 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:59.535 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:59.535 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:59.535 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:59.535 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:59.535 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:59.535 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:59.535 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:59.535 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:59.535 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:59.535 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:59.535 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:59.535 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:59.535 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:59.535 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:59.535 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:59.535 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:59.535 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:59.535 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:59.535 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:59.535 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:59.536 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:59.536 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:59.536 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:59.536 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:59.536 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:59.536 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:59.536 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:59.536 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:59.536 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:59.536 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:59.536 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:59.794 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:59.794 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:59.794 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:59.794 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:59.794 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:59.794 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:59.794 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:59.794 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:59.794 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:59.794 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:59.794 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:59.794 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:59.794 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:59.794 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:59.794 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:59.794 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:59.794 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:59.794 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:59.794 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:59.794 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:00.052 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:00.052 [85/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:00.052 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:00.052 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:00.052 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:00.052 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:00.052 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:00.052 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:00.052 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:00.052 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:00.052 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:00.052 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:00.052 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:00.052 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:00.052 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:00.052 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:00.052 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:00.052 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:00.052 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:00.052 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:00.052 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:00.052 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:00.052 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:00.052 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:00.052 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:00.052 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:00.052 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:00.052 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:00.052 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:00.052 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:00.052 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:00.052 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:00.052 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:00.052 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:00.052 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:00.052 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:00.052 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:00.052 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:00.311 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:00.311 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:00.311 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:00.311 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:00.311 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:00.311 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:00.311 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:00.311 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:00.311 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:00.311 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:00.311 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:00.311 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:00.311 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:00.311 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:00.311 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:00.312 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:00.312 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:00.312 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:00.312 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:00.312 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:00.312 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:00.571 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:00.571 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:00.571 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:00.571 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:00.571 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:00.571 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:00.571 [149/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:00.571 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:00.571 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:00.571 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:00.571 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:00.571 [154/203] Linking target lib/libxnvme.so 00:04:00.571 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:00.571 [156/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:00.571 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:00.571 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:00.571 [159/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:00.829 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:00.829 [161/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:00.829 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:00.829 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:00.829 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:00.829 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:00.829 [166/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:00.829 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:00.829 [168/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:00.829 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:00.829 [170/203] Linking static target lib/libxnvme.a 00:04:00.829 [171/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:00.829 [172/203] Linking target tests/xnvme_tests_scc 00:04:00.829 [173/203] Linking target tests/xnvme_tests_async_intf 00:04:00.829 [174/203] Linking target tests/xnvme_tests_enum 00:04:00.829 [175/203] Linking target tests/xnvme_tests_lblk 00:04:00.829 [176/203] Linking target tests/xnvme_tests_xnvme_file 00:04:00.829 [177/203] Linking target tests/xnvme_tests_znd_append 00:04:00.829 [178/203] Linking target tests/xnvme_tests_buf 00:04:00.829 [179/203] Linking target tests/xnvme_tests_cli 00:04:00.829 [180/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:00.829 [181/203] Linking target tests/xnvme_tests_ioworker 00:04:00.829 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:00.829 [183/203] Linking target tests/xnvme_tests_kvs 00:04:00.829 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:00.829 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:00.829 [186/203] Linking target tests/xnvme_tests_znd_state 00:04:00.829 [187/203] Linking target tools/zoned 00:04:00.829 [188/203] Linking target tools/lblk 00:04:00.829 [189/203] Linking target tests/xnvme_tests_map 00:04:00.829 [190/203] Linking target examples/xnvme_dev 00:04:00.829 [191/203] Linking target tools/xdd 
00:04:00.829 [192/203] Linking target tools/kvs 00:04:00.829 [193/203] Linking target examples/xnvme_enum 00:04:01.087 [194/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:01.087 [195/203] Linking target examples/xnvme_io_async 00:04:01.087 [196/203] Linking target examples/xnvme_single_async 00:04:01.087 [197/203] Linking target examples/xnvme_hello 00:04:01.087 [198/203] Linking target examples/zoned_io_sync 00:04:01.087 [199/203] Linking target examples/zoned_io_async 00:04:01.087 [200/203] Linking target examples/xnvme_single_sync 00:04:01.087 [201/203] Linking target tools/xnvme_file 00:04:01.087 [202/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:01.087 [203/203] Linking target tools/xnvme 00:04:01.087 INFO: autodetecting backend as ninja 00:04:01.087 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:01.087 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:33.205 CC lib/log/log.o 00:04:33.205 CC lib/log/log_flags.o 00:04:33.205 CC lib/log/log_deprecated.o 00:04:33.205 CC lib/ut/ut.o 00:04:33.205 CC lib/ut_mock/mock.o 00:04:33.205 LIB libspdk_log.a 00:04:33.205 LIB libspdk_ut.a 00:04:33.205 SO libspdk_log.so.7.0 00:04:33.205 SO libspdk_ut.so.2.0 00:04:33.205 LIB libspdk_ut_mock.a 00:04:33.205 SO libspdk_ut_mock.so.6.0 00:04:33.205 SYMLINK libspdk_log.so 00:04:33.205 SYMLINK libspdk_ut.so 00:04:33.205 SYMLINK libspdk_ut_mock.so 00:04:33.205 CC lib/ioat/ioat.o 00:04:33.205 CC lib/dma/dma.o 00:04:33.205 CC lib/util/base64.o 00:04:33.205 CC lib/util/crc16.o 00:04:33.205 CC lib/util/bit_array.o 00:04:33.205 CC lib/util/crc32.o 00:04:33.205 CC lib/util/cpuset.o 00:04:33.205 CC lib/util/crc32c.o 00:04:33.205 CXX lib/trace_parser/trace.o 00:04:33.205 CC lib/util/crc32_ieee.o 00:04:33.205 CC lib/vfio_user/host/vfio_user_pci.o 00:04:33.205 CC lib/vfio_user/host/vfio_user.o 00:04:33.205 CC lib/util/crc64.o 00:04:33.205 CC lib/util/dif.o 00:04:33.205 LIB libspdk_dma.a 00:04:33.205 CC lib/util/fd.o 00:04:33.205 CC lib/util/fd_group.o 00:04:33.205 SO libspdk_dma.so.5.0 00:04:33.205 CC lib/util/file.o 00:04:33.205 CC lib/util/hexlify.o 00:04:33.205 SYMLINK libspdk_dma.so 00:04:33.205 LIB libspdk_ioat.a 00:04:33.205 CC lib/util/iov.o 00:04:33.205 SO libspdk_ioat.so.7.0 00:04:33.205 CC lib/util/math.o 00:04:33.205 CC lib/util/net.o 00:04:33.205 SYMLINK libspdk_ioat.so 00:04:33.205 CC lib/util/pipe.o 00:04:33.205 CC lib/util/strerror_tls.o 00:04:33.205 LIB libspdk_vfio_user.a 00:04:33.205 CC lib/util/string.o 00:04:33.205 SO libspdk_vfio_user.so.5.0 00:04:33.205 CC lib/util/uuid.o 00:04:33.205 SYMLINK libspdk_vfio_user.so 00:04:33.205 CC lib/util/xor.o 00:04:33.205 CC lib/util/zipf.o 00:04:33.205 CC lib/util/md5.o 00:04:33.205 LIB libspdk_util.a 00:04:33.205 SO libspdk_util.so.10.0 00:04:33.205 LIB libspdk_trace_parser.a 00:04:33.205 SO libspdk_trace_parser.so.6.0 00:04:33.205 SYMLINK libspdk_util.so 00:04:33.205 SYMLINK libspdk_trace_parser.so 00:04:33.205 CC lib/conf/conf.o 00:04:33.205 CC lib/rdma_provider/common.o 00:04:33.205 CC lib/idxd/idxd.o 00:04:33.205 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:33.205 CC lib/idxd/idxd_user.o 00:04:33.205 CC lib/rdma_utils/rdma_utils.o 00:04:33.205 CC lib/vmd/vmd.o 00:04:33.205 CC lib/idxd/idxd_kernel.o 00:04:33.205 CC lib/env_dpdk/env.o 00:04:33.205 CC lib/json/json_parse.o 00:04:33.205 CC lib/json/json_util.o 00:04:33.205 LIB libspdk_conf.a 00:04:33.205 CC lib/json/json_write.o 00:04:33.205 LIB libspdk_rdma_provider.a 00:04:33.205 SO libspdk_conf.so.6.0 00:04:33.205 SO 
libspdk_rdma_provider.so.6.0 00:04:33.205 SYMLINK libspdk_conf.so 00:04:33.205 CC lib/env_dpdk/memory.o 00:04:33.205 CC lib/vmd/led.o 00:04:33.205 CC lib/env_dpdk/pci.o 00:04:33.205 SYMLINK libspdk_rdma_provider.so 00:04:33.205 CC lib/env_dpdk/init.o 00:04:33.205 LIB libspdk_rdma_utils.a 00:04:33.205 SO libspdk_rdma_utils.so.1.0 00:04:33.205 SYMLINK libspdk_rdma_utils.so 00:04:33.205 CC lib/env_dpdk/threads.o 00:04:33.205 CC lib/env_dpdk/pci_ioat.o 00:04:33.205 CC lib/env_dpdk/pci_virtio.o 00:04:33.205 LIB libspdk_json.a 00:04:33.205 SO libspdk_json.so.6.0 00:04:33.205 CC lib/env_dpdk/pci_vmd.o 00:04:33.205 CC lib/env_dpdk/pci_idxd.o 00:04:33.205 CC lib/env_dpdk/pci_event.o 00:04:33.205 SYMLINK libspdk_json.so 00:04:33.205 CC lib/env_dpdk/sigbus_handler.o 00:04:33.205 CC lib/env_dpdk/pci_dpdk.o 00:04:33.205 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:33.205 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:33.205 LIB libspdk_idxd.a 00:04:33.205 CC lib/jsonrpc/jsonrpc_server.o 00:04:33.205 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:33.205 LIB libspdk_vmd.a 00:04:33.205 SO libspdk_idxd.so.12.1 00:04:33.205 SO libspdk_vmd.so.6.0 00:04:33.205 CC lib/jsonrpc/jsonrpc_client.o 00:04:33.205 SYMLINK libspdk_idxd.so 00:04:33.205 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:33.205 SYMLINK libspdk_vmd.so 00:04:33.464 LIB libspdk_jsonrpc.a 00:04:33.464 SO libspdk_jsonrpc.so.6.0 00:04:33.464 SYMLINK libspdk_jsonrpc.so 00:04:33.722 CC lib/rpc/rpc.o 00:04:33.982 LIB libspdk_env_dpdk.a 00:04:33.982 LIB libspdk_rpc.a 00:04:33.982 SO libspdk_rpc.so.6.0 00:04:33.982 SO libspdk_env_dpdk.so.15.0 00:04:33.982 SYMLINK libspdk_rpc.so 00:04:33.982 SYMLINK libspdk_env_dpdk.so 00:04:34.242 CC lib/notify/notify.o 00:04:34.242 CC lib/notify/notify_rpc.o 00:04:34.242 CC lib/trace/trace.o 00:04:34.242 CC lib/trace/trace_flags.o 00:04:34.242 CC lib/trace/trace_rpc.o 00:04:34.242 CC lib/keyring/keyring.o 00:04:34.242 CC lib/keyring/keyring_rpc.o 00:04:34.242 LIB libspdk_notify.a 00:04:34.242 SO libspdk_notify.so.6.0 00:04:34.242 SYMLINK libspdk_notify.so 00:04:34.242 LIB libspdk_keyring.a 00:04:34.503 LIB libspdk_trace.a 00:04:34.503 SO libspdk_keyring.so.2.0 00:04:34.503 SO libspdk_trace.so.11.0 00:04:34.503 SYMLINK libspdk_keyring.so 00:04:34.503 SYMLINK libspdk_trace.so 00:04:34.765 CC lib/thread/iobuf.o 00:04:34.765 CC lib/thread/thread.o 00:04:34.765 CC lib/sock/sock_rpc.o 00:04:34.765 CC lib/sock/sock.o 00:04:35.027 LIB libspdk_sock.a 00:04:35.027 SO libspdk_sock.so.10.0 00:04:35.288 SYMLINK libspdk_sock.so 00:04:35.548 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:35.548 CC lib/nvme/nvme_ctrlr.o 00:04:35.548 CC lib/nvme/nvme_fabric.o 00:04:35.548 CC lib/nvme/nvme_ns_cmd.o 00:04:35.548 CC lib/nvme/nvme_ns.o 00:04:35.548 CC lib/nvme/nvme_pcie_common.o 00:04:35.548 CC lib/nvme/nvme_pcie.o 00:04:35.548 CC lib/nvme/nvme_qpair.o 00:04:35.548 CC lib/nvme/nvme.o 00:04:35.810 CC lib/nvme/nvme_quirks.o 00:04:36.072 CC lib/nvme/nvme_transport.o 00:04:36.072 CC lib/nvme/nvme_discovery.o 00:04:36.072 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:36.072 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:36.072 LIB libspdk_thread.a 00:04:36.072 CC lib/nvme/nvme_tcp.o 00:04:36.333 CC lib/nvme/nvme_opal.o 00:04:36.333 CC lib/nvme/nvme_io_msg.o 00:04:36.333 SO libspdk_thread.so.10.1 00:04:36.333 CC lib/nvme/nvme_poll_group.o 00:04:36.333 SYMLINK libspdk_thread.so 00:04:36.333 CC lib/nvme/nvme_zns.o 00:04:36.595 CC lib/nvme/nvme_stubs.o 00:04:36.595 CC lib/nvme/nvme_auth.o 00:04:36.595 CC lib/nvme/nvme_cuse.o 00:04:36.595 CC lib/nvme/nvme_rdma.o 00:04:36.856 CC lib/blob/blobstore.o 
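In the make output above and below, each SPDK component is first archived as a static library (LIB libspdk_*.a), then linked as a versioned shared object (SO libspdk_*.so.N.M) with an unversioned SYMLINK alias, mirroring the DPDK soname layout installed earlier. The freshly linked objects can be inspected with standard binutils; the build/lib path below assumes SPDK's default in-tree output directory, which is an assumption rather than something shown in this log:

  cd /home/vagrant/spdk_repo/spdk
  readelf -d build/lib/libspdk_log.so.7.0 | grep SONAME   # soname embedded at link time (output path assumed)
  ldd build/lib/libspdk_log.so.7.0                        # shared libraries this object resolves against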
00:04:36.856 CC lib/accel/accel.o 00:04:36.856 CC lib/init/json_config.o 00:04:37.116 CC lib/virtio/virtio.o 00:04:37.117 CC lib/fsdev/fsdev.o 00:04:37.117 CC lib/init/subsystem.o 00:04:37.378 CC lib/virtio/virtio_vhost_user.o 00:04:37.378 CC lib/init/subsystem_rpc.o 00:04:37.378 CC lib/fsdev/fsdev_io.o 00:04:37.378 CC lib/init/rpc.o 00:04:37.378 CC lib/fsdev/fsdev_rpc.o 00:04:37.378 CC lib/virtio/virtio_vfio_user.o 00:04:37.640 CC lib/virtio/virtio_pci.o 00:04:37.640 LIB libspdk_init.a 00:04:37.640 CC lib/blob/request.o 00:04:37.640 SO libspdk_init.so.6.0 00:04:37.640 CC lib/accel/accel_rpc.o 00:04:37.640 SYMLINK libspdk_init.so 00:04:37.640 CC lib/blob/zeroes.o 00:04:37.640 CC lib/accel/accel_sw.o 00:04:37.640 LIB libspdk_nvme.a 00:04:37.640 LIB libspdk_fsdev.a 00:04:37.901 CC lib/blob/blob_bs_dev.o 00:04:37.901 SO libspdk_fsdev.so.1.0 00:04:37.901 CC lib/event/app.o 00:04:37.901 SO libspdk_nvme.so.14.0 00:04:37.901 LIB libspdk_virtio.a 00:04:37.901 SYMLINK libspdk_fsdev.so 00:04:37.901 CC lib/event/reactor.o 00:04:37.901 CC lib/event/log_rpc.o 00:04:37.902 SO libspdk_virtio.so.7.0 00:04:37.902 SYMLINK libspdk_virtio.so 00:04:37.902 CC lib/event/app_rpc.o 00:04:37.902 CC lib/event/scheduler_static.o 00:04:37.902 SYMLINK libspdk_nvme.so 00:04:38.181 LIB libspdk_accel.a 00:04:38.181 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:38.181 SO libspdk_accel.so.16.0 00:04:38.181 SYMLINK libspdk_accel.so 00:04:38.181 LIB libspdk_event.a 00:04:38.181 SO libspdk_event.so.14.0 00:04:38.444 CC lib/bdev/bdev.o 00:04:38.444 CC lib/bdev/part.o 00:04:38.444 CC lib/bdev/bdev_rpc.o 00:04:38.444 CC lib/bdev/bdev_zone.o 00:04:38.444 CC lib/bdev/scsi_nvme.o 00:04:38.444 SYMLINK libspdk_event.so 00:04:38.705 LIB libspdk_fuse_dispatcher.a 00:04:38.705 SO libspdk_fuse_dispatcher.so.1.0 00:04:38.705 SYMLINK libspdk_fuse_dispatcher.so 00:04:39.649 LIB libspdk_blob.a 00:04:39.649 SO libspdk_blob.so.11.0 00:04:39.649 SYMLINK libspdk_blob.so 00:04:39.910 CC lib/blobfs/tree.o 00:04:39.910 CC lib/blobfs/blobfs.o 00:04:39.910 CC lib/lvol/lvol.o 00:04:40.853 LIB libspdk_blobfs.a 00:04:40.853 SO libspdk_blobfs.so.10.0 00:04:40.853 SYMLINK libspdk_blobfs.so 00:04:40.853 LIB libspdk_lvol.a 00:04:40.853 SO libspdk_lvol.so.10.0 00:04:40.853 SYMLINK libspdk_lvol.so 00:04:41.114 LIB libspdk_bdev.a 00:04:41.114 SO libspdk_bdev.so.16.0 00:04:41.375 SYMLINK libspdk_bdev.so 00:04:41.375 CC lib/scsi/dev.o 00:04:41.375 CC lib/ublk/ublk.o 00:04:41.375 CC lib/scsi/port.o 00:04:41.375 CC lib/ublk/ublk_rpc.o 00:04:41.375 CC lib/scsi/scsi.o 00:04:41.375 CC lib/scsi/lun.o 00:04:41.375 CC lib/scsi/scsi_bdev.o 00:04:41.375 CC lib/nvmf/ctrlr.o 00:04:41.375 CC lib/nbd/nbd.o 00:04:41.375 CC lib/ftl/ftl_core.o 00:04:41.636 CC lib/ftl/ftl_init.o 00:04:41.636 CC lib/ftl/ftl_layout.o 00:04:41.636 CC lib/ftl/ftl_debug.o 00:04:41.636 CC lib/ftl/ftl_io.o 00:04:41.636 CC lib/ftl/ftl_sb.o 00:04:41.898 CC lib/scsi/scsi_pr.o 00:04:41.898 CC lib/ftl/ftl_l2p.o 00:04:41.898 CC lib/nbd/nbd_rpc.o 00:04:41.898 CC lib/scsi/scsi_rpc.o 00:04:41.898 CC lib/ftl/ftl_l2p_flat.o 00:04:41.898 CC lib/scsi/task.o 00:04:41.898 CC lib/ftl/ftl_nv_cache.o 00:04:41.898 CC lib/ftl/ftl_band.o 00:04:41.898 LIB libspdk_nbd.a 00:04:41.898 CC lib/ftl/ftl_band_ops.o 00:04:41.898 SO libspdk_nbd.so.7.0 00:04:41.898 CC lib/nvmf/ctrlr_discovery.o 00:04:41.898 CC lib/nvmf/ctrlr_bdev.o 00:04:42.160 CC lib/ftl/ftl_writer.o 00:04:42.160 SYMLINK libspdk_nbd.so 00:04:42.160 CC lib/nvmf/subsystem.o 00:04:42.160 LIB libspdk_scsi.a 00:04:42.160 LIB libspdk_ublk.a 00:04:42.160 SO 
libspdk_scsi.so.9.0 00:04:42.160 SO libspdk_ublk.so.3.0 00:04:42.160 SYMLINK libspdk_ublk.so 00:04:42.160 SYMLINK libspdk_scsi.so 00:04:42.160 CC lib/nvmf/nvmf.o 00:04:42.160 CC lib/nvmf/nvmf_rpc.o 00:04:42.160 CC lib/ftl/ftl_rq.o 00:04:42.421 CC lib/iscsi/conn.o 00:04:42.421 CC lib/vhost/vhost.o 00:04:42.421 CC lib/vhost/vhost_rpc.o 00:04:42.421 CC lib/vhost/vhost_scsi.o 00:04:42.681 CC lib/vhost/vhost_blk.o 00:04:42.681 CC lib/iscsi/init_grp.o 00:04:42.681 CC lib/iscsi/iscsi.o 00:04:42.942 CC lib/iscsi/param.o 00:04:42.942 CC lib/ftl/ftl_reloc.o 00:04:42.942 CC lib/vhost/rte_vhost_user.o 00:04:42.942 CC lib/iscsi/portal_grp.o 00:04:43.203 CC lib/nvmf/transport.o 00:04:43.203 CC lib/iscsi/tgt_node.o 00:04:43.203 CC lib/iscsi/iscsi_subsystem.o 00:04:43.203 CC lib/nvmf/tcp.o 00:04:43.203 CC lib/ftl/ftl_l2p_cache.o 00:04:43.203 CC lib/nvmf/stubs.o 00:04:43.462 CC lib/iscsi/iscsi_rpc.o 00:04:43.462 CC lib/iscsi/task.o 00:04:43.462 CC lib/nvmf/mdns_server.o 00:04:43.720 CC lib/nvmf/rdma.o 00:04:43.720 CC lib/ftl/ftl_p2l.o 00:04:43.720 CC lib/nvmf/auth.o 00:04:43.720 CC lib/ftl/ftl_p2l_log.o 00:04:43.720 CC lib/ftl/mngt/ftl_mngt.o 00:04:43.720 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:43.977 LIB libspdk_vhost.a 00:04:43.977 LIB libspdk_iscsi.a 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:43.977 SO libspdk_vhost.so.8.0 00:04:43.977 SO libspdk_iscsi.so.8.0 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:43.977 SYMLINK libspdk_vhost.so 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:43.977 SYMLINK libspdk_iscsi.so 00:04:43.977 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:44.261 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:44.261 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:44.261 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:44.261 CC lib/ftl/utils/ftl_conf.o 00:04:44.261 CC lib/ftl/utils/ftl_md.o 00:04:44.261 CC lib/ftl/utils/ftl_mempool.o 00:04:44.261 CC lib/ftl/utils/ftl_bitmap.o 00:04:44.261 CC lib/ftl/utils/ftl_property.o 00:04:44.261 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:44.518 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:44.518 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:44.518 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:44.518 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:44.518 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:44.518 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:44.518 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:44.518 CC lib/ftl/base/ftl_base_dev.o 00:04:44.775 CC lib/ftl/base/ftl_base_bdev.o 00:04:44.775 CC lib/ftl/ftl_trace.o 00:04:44.775 LIB libspdk_ftl.a 00:04:45.031 SO libspdk_ftl.so.9.0 00:04:45.288 SYMLINK libspdk_ftl.so 00:04:45.288 LIB libspdk_nvmf.a 00:04:45.546 SO libspdk_nvmf.so.19.0 00:04:45.803 SYMLINK libspdk_nvmf.so 00:04:46.062 CC module/env_dpdk/env_dpdk_rpc.o 00:04:46.062 CC module/accel/dsa/accel_dsa.o 00:04:46.062 CC module/accel/iaa/accel_iaa.o 00:04:46.062 CC module/blob/bdev/blob_bdev.o 00:04:46.062 CC module/accel/error/accel_error.o 00:04:46.062 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:46.062 CC module/keyring/file/keyring.o 00:04:46.062 CC module/accel/ioat/accel_ioat.o 00:04:46.062 CC module/fsdev/aio/fsdev_aio.o 00:04:46.062 CC 
module/sock/posix/posix.o 00:04:46.062 LIB libspdk_env_dpdk_rpc.a 00:04:46.062 SO libspdk_env_dpdk_rpc.so.6.0 00:04:46.062 SYMLINK libspdk_env_dpdk_rpc.so 00:04:46.062 CC module/accel/dsa/accel_dsa_rpc.o 00:04:46.062 CC module/accel/error/accel_error_rpc.o 00:04:46.062 CC module/keyring/file/keyring_rpc.o 00:04:46.062 LIB libspdk_scheduler_dynamic.a 00:04:46.062 SO libspdk_scheduler_dynamic.so.4.0 00:04:46.062 CC module/accel/iaa/accel_iaa_rpc.o 00:04:46.062 CC module/accel/ioat/accel_ioat_rpc.o 00:04:46.320 LIB libspdk_blob_bdev.a 00:04:46.320 SYMLINK libspdk_scheduler_dynamic.so 00:04:46.320 SO libspdk_blob_bdev.so.11.0 00:04:46.320 LIB libspdk_accel_error.a 00:04:46.320 SO libspdk_accel_error.so.2.0 00:04:46.320 SYMLINK libspdk_blob_bdev.so 00:04:46.320 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:46.320 CC module/fsdev/aio/linux_aio_mgr.o 00:04:46.320 LIB libspdk_keyring_file.a 00:04:46.320 LIB libspdk_accel_iaa.a 00:04:46.320 LIB libspdk_accel_dsa.a 00:04:46.320 SO libspdk_keyring_file.so.2.0 00:04:46.320 LIB libspdk_accel_ioat.a 00:04:46.320 SO libspdk_accel_iaa.so.3.0 00:04:46.320 SO libspdk_accel_dsa.so.5.0 00:04:46.320 SYMLINK libspdk_accel_error.so 00:04:46.320 SO libspdk_accel_ioat.so.6.0 00:04:46.320 SYMLINK libspdk_keyring_file.so 00:04:46.320 SYMLINK libspdk_accel_iaa.so 00:04:46.320 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:46.320 SYMLINK libspdk_accel_dsa.so 00:04:46.320 SYMLINK libspdk_accel_ioat.so 00:04:46.577 CC module/keyring/linux/keyring.o 00:04:46.577 CC module/scheduler/gscheduler/gscheduler.o 00:04:46.577 LIB libspdk_scheduler_dpdk_governor.a 00:04:46.577 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:46.577 CC module/bdev/delay/vbdev_delay.o 00:04:46.577 CC module/bdev/gpt/gpt.o 00:04:46.577 CC module/blobfs/bdev/blobfs_bdev.o 00:04:46.577 CC module/bdev/error/vbdev_error.o 00:04:46.577 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:46.577 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:46.577 LIB libspdk_scheduler_gscheduler.a 00:04:46.577 CC module/keyring/linux/keyring_rpc.o 00:04:46.577 CC module/bdev/lvol/vbdev_lvol.o 00:04:46.577 SO libspdk_scheduler_gscheduler.so.4.0 00:04:46.577 SYMLINK libspdk_scheduler_gscheduler.so 00:04:46.577 CC module/bdev/error/vbdev_error_rpc.o 00:04:46.577 LIB libspdk_fsdev_aio.a 00:04:46.577 CC module/bdev/gpt/vbdev_gpt.o 00:04:46.577 SO libspdk_fsdev_aio.so.1.0 00:04:46.577 LIB libspdk_keyring_linux.a 00:04:46.834 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:46.834 SO libspdk_keyring_linux.so.1.0 00:04:46.834 LIB libspdk_blobfs_bdev.a 00:04:46.834 SYMLINK libspdk_fsdev_aio.so 00:04:46.834 SO libspdk_blobfs_bdev.so.6.0 00:04:46.834 LIB libspdk_sock_posix.a 00:04:46.834 SYMLINK libspdk_keyring_linux.so 00:04:46.834 SO libspdk_sock_posix.so.6.0 00:04:46.834 SYMLINK libspdk_blobfs_bdev.so 00:04:46.834 LIB libspdk_bdev_error.a 00:04:46.834 SO libspdk_bdev_error.so.6.0 00:04:46.834 SYMLINK libspdk_sock_posix.so 00:04:46.834 CC module/bdev/malloc/bdev_malloc.o 00:04:46.834 SYMLINK libspdk_bdev_error.so 00:04:46.834 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:46.834 CC module/bdev/null/bdev_null.o 00:04:46.834 LIB libspdk_bdev_delay.a 00:04:46.834 CC module/bdev/nvme/bdev_nvme.o 00:04:46.834 CC module/bdev/passthru/vbdev_passthru.o 00:04:46.834 SO libspdk_bdev_delay.so.6.0 00:04:46.834 LIB libspdk_bdev_gpt.a 00:04:47.090 SO libspdk_bdev_gpt.so.6.0 00:04:47.090 CC module/bdev/raid/bdev_raid.o 00:04:47.090 SYMLINK libspdk_bdev_delay.so 00:04:47.090 CC module/bdev/split/vbdev_split.o 00:04:47.090 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:04:47.090 CC module/bdev/split/vbdev_split_rpc.o 00:04:47.090 SYMLINK libspdk_bdev_gpt.so 00:04:47.090 CC module/bdev/raid/bdev_raid_rpc.o 00:04:47.090 CC module/bdev/raid/bdev_raid_sb.o 00:04:47.090 CC module/bdev/null/bdev_null_rpc.o 00:04:47.090 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:47.090 CC module/bdev/raid/raid0.o 00:04:47.090 LIB libspdk_bdev_split.a 00:04:47.090 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:47.347 LIB libspdk_bdev_null.a 00:04:47.347 SO libspdk_bdev_split.so.6.0 00:04:47.347 LIB libspdk_bdev_malloc.a 00:04:47.347 SO libspdk_bdev_null.so.6.0 00:04:47.347 SO libspdk_bdev_malloc.so.6.0 00:04:47.347 SYMLINK libspdk_bdev_split.so 00:04:47.347 SYMLINK libspdk_bdev_null.so 00:04:47.347 CC module/bdev/raid/raid1.o 00:04:47.347 CC module/bdev/raid/concat.o 00:04:47.347 SYMLINK libspdk_bdev_malloc.so 00:04:47.347 CC module/bdev/nvme/nvme_rpc.o 00:04:47.347 LIB libspdk_bdev_passthru.a 00:04:47.347 LIB libspdk_bdev_lvol.a 00:04:47.347 SO libspdk_bdev_passthru.so.6.0 00:04:47.347 SO libspdk_bdev_lvol.so.6.0 00:04:47.347 SYMLINK libspdk_bdev_passthru.so 00:04:47.347 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:47.347 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:47.604 SYMLINK libspdk_bdev_lvol.so 00:04:47.604 CC module/bdev/xnvme/bdev_xnvme.o 00:04:47.604 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:47.604 CC module/bdev/nvme/bdev_mdns_client.o 00:04:47.604 CC module/bdev/aio/bdev_aio.o 00:04:47.604 CC module/bdev/aio/bdev_aio_rpc.o 00:04:47.604 CC module/bdev/ftl/bdev_ftl.o 00:04:47.604 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:47.604 LIB libspdk_bdev_zone_block.a 00:04:47.604 LIB libspdk_bdev_xnvme.a 00:04:47.861 CC module/bdev/nvme/vbdev_opal.o 00:04:47.861 SO libspdk_bdev_zone_block.so.6.0 00:04:47.861 SO libspdk_bdev_xnvme.so.3.0 00:04:47.861 SYMLINK libspdk_bdev_zone_block.so 00:04:47.861 SYMLINK libspdk_bdev_xnvme.so 00:04:47.861 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:47.861 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:47.861 CC module/bdev/iscsi/bdev_iscsi.o 00:04:47.861 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:47.861 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:47.861 LIB libspdk_bdev_ftl.a 00:04:47.861 LIB libspdk_bdev_aio.a 00:04:47.861 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:47.861 SO libspdk_bdev_ftl.so.6.0 00:04:47.861 SO libspdk_bdev_aio.so.6.0 00:04:47.861 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:48.118 SYMLINK libspdk_bdev_ftl.so 00:04:48.118 SYMLINK libspdk_bdev_aio.so 00:04:48.118 LIB libspdk_bdev_raid.a 00:04:48.118 SO libspdk_bdev_raid.so.6.0 00:04:48.118 SYMLINK libspdk_bdev_raid.so 00:04:48.375 LIB libspdk_bdev_iscsi.a 00:04:48.375 SO libspdk_bdev_iscsi.so.6.0 00:04:48.375 SYMLINK libspdk_bdev_iscsi.so 00:04:48.375 LIB libspdk_bdev_virtio.a 00:04:48.375 SO libspdk_bdev_virtio.so.6.0 00:04:48.375 SYMLINK libspdk_bdev_virtio.so 00:04:48.940 LIB libspdk_bdev_nvme.a 00:04:48.940 SO libspdk_bdev_nvme.so.7.0 00:04:49.197 SYMLINK libspdk_bdev_nvme.so 00:04:49.457 CC module/event/subsystems/vmd/vmd.o 00:04:49.457 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:49.457 CC module/event/subsystems/keyring/keyring.o 00:04:49.457 CC module/event/subsystems/iobuf/iobuf.o 00:04:49.457 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:49.457 CC module/event/subsystems/sock/sock.o 00:04:49.457 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:49.457 CC module/event/subsystems/scheduler/scheduler.o 00:04:49.457 CC module/event/subsystems/fsdev/fsdev.o 00:04:49.457 LIB 
libspdk_event_keyring.a 00:04:49.457 SO libspdk_event_keyring.so.1.0 00:04:49.457 LIB libspdk_event_fsdev.a 00:04:49.457 LIB libspdk_event_vmd.a 00:04:49.457 LIB libspdk_event_vhost_blk.a 00:04:49.457 LIB libspdk_event_scheduler.a 00:04:49.457 SO libspdk_event_fsdev.so.1.0 00:04:49.457 LIB libspdk_event_iobuf.a 00:04:49.457 LIB libspdk_event_sock.a 00:04:49.716 SO libspdk_event_vhost_blk.so.3.0 00:04:49.716 SO libspdk_event_vmd.so.6.0 00:04:49.716 SO libspdk_event_scheduler.so.4.0 00:04:49.716 SYMLINK libspdk_event_keyring.so 00:04:49.716 SO libspdk_event_sock.so.5.0 00:04:49.716 SO libspdk_event_iobuf.so.3.0 00:04:49.716 SYMLINK libspdk_event_fsdev.so 00:04:49.716 SYMLINK libspdk_event_vhost_blk.so 00:04:49.716 SYMLINK libspdk_event_vmd.so 00:04:49.716 SYMLINK libspdk_event_scheduler.so 00:04:49.716 SYMLINK libspdk_event_sock.so 00:04:49.716 SYMLINK libspdk_event_iobuf.so 00:04:49.974 CC module/event/subsystems/accel/accel.o 00:04:49.974 LIB libspdk_event_accel.a 00:04:49.974 SO libspdk_event_accel.so.6.0 00:04:49.974 SYMLINK libspdk_event_accel.so 00:04:50.232 CC module/event/subsystems/bdev/bdev.o 00:04:50.491 LIB libspdk_event_bdev.a 00:04:50.491 SO libspdk_event_bdev.so.6.0 00:04:50.491 SYMLINK libspdk_event_bdev.so 00:04:50.749 CC module/event/subsystems/ublk/ublk.o 00:04:50.749 CC module/event/subsystems/scsi/scsi.o 00:04:50.749 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:50.749 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:50.749 CC module/event/subsystems/nbd/nbd.o 00:04:50.749 LIB libspdk_event_ublk.a 00:04:50.749 LIB libspdk_event_scsi.a 00:04:50.749 LIB libspdk_event_nbd.a 00:04:50.749 SO libspdk_event_ublk.so.3.0 00:04:50.749 SO libspdk_event_nbd.so.6.0 00:04:50.749 SO libspdk_event_scsi.so.6.0 00:04:51.007 SYMLINK libspdk_event_ublk.so 00:04:51.008 SYMLINK libspdk_event_nbd.so 00:04:51.008 SYMLINK libspdk_event_scsi.so 00:04:51.008 LIB libspdk_event_nvmf.a 00:04:51.008 SO libspdk_event_nvmf.so.6.0 00:04:51.008 SYMLINK libspdk_event_nvmf.so 00:04:51.008 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:51.008 CC module/event/subsystems/iscsi/iscsi.o 00:04:51.266 LIB libspdk_event_vhost_scsi.a 00:04:51.266 LIB libspdk_event_iscsi.a 00:04:51.266 SO libspdk_event_iscsi.so.6.0 00:04:51.266 SO libspdk_event_vhost_scsi.so.3.0 00:04:51.266 SYMLINK libspdk_event_vhost_scsi.so 00:04:51.266 SYMLINK libspdk_event_iscsi.so 00:04:51.525 SO libspdk.so.6.0 00:04:51.525 SYMLINK libspdk.so 00:04:51.525 TEST_HEADER include/spdk/accel.h 00:04:51.525 TEST_HEADER include/spdk/accel_module.h 00:04:51.525 CC app/trace_record/trace_record.o 00:04:51.525 TEST_HEADER include/spdk/assert.h 00:04:51.525 CC test/rpc_client/rpc_client_test.o 00:04:51.525 TEST_HEADER include/spdk/barrier.h 00:04:51.525 TEST_HEADER include/spdk/base64.h 00:04:51.525 CXX app/trace/trace.o 00:04:51.525 TEST_HEADER include/spdk/bdev.h 00:04:51.525 TEST_HEADER include/spdk/bdev_module.h 00:04:51.525 TEST_HEADER include/spdk/bdev_zone.h 00:04:51.525 TEST_HEADER include/spdk/bit_array.h 00:04:51.525 TEST_HEADER include/spdk/bit_pool.h 00:04:51.525 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.525 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.525 TEST_HEADER include/spdk/blobfs.h 00:04:51.525 TEST_HEADER include/spdk/blob.h 00:04:51.525 TEST_HEADER include/spdk/conf.h 00:04:51.525 TEST_HEADER include/spdk/config.h 00:04:51.525 TEST_HEADER include/spdk/cpuset.h 00:04:51.525 TEST_HEADER include/spdk/crc16.h 00:04:51.525 TEST_HEADER include/spdk/crc32.h 00:04:51.525 TEST_HEADER include/spdk/crc64.h 00:04:51.525 
TEST_HEADER include/spdk/dif.h 00:04:51.525 TEST_HEADER include/spdk/dma.h 00:04:51.525 TEST_HEADER include/spdk/endian.h 00:04:51.525 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.525 TEST_HEADER include/spdk/env.h 00:04:51.525 TEST_HEADER include/spdk/event.h 00:04:51.525 CC app/nvmf_tgt/nvmf_main.o 00:04:51.525 TEST_HEADER include/spdk/fd_group.h 00:04:51.525 TEST_HEADER include/spdk/fd.h 00:04:51.525 TEST_HEADER include/spdk/file.h 00:04:51.525 TEST_HEADER include/spdk/fsdev.h 00:04:51.525 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.525 TEST_HEADER include/spdk/ftl.h 00:04:51.525 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.525 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.525 TEST_HEADER include/spdk/hexlify.h 00:04:51.525 TEST_HEADER include/spdk/histogram_data.h 00:04:51.525 TEST_HEADER include/spdk/idxd.h 00:04:51.525 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.525 TEST_HEADER include/spdk/init.h 00:04:51.525 TEST_HEADER include/spdk/ioat.h 00:04:51.525 TEST_HEADER include/spdk/ioat_spec.h 00:04:51.525 TEST_HEADER include/spdk/iscsi_spec.h 00:04:51.525 TEST_HEADER include/spdk/json.h 00:04:51.525 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.525 TEST_HEADER include/spdk/keyring.h 00:04:51.525 TEST_HEADER include/spdk/keyring_module.h 00:04:51.525 TEST_HEADER include/spdk/likely.h 00:04:51.525 TEST_HEADER include/spdk/log.h 00:04:51.525 CC test/thread/poller_perf/poller_perf.o 00:04:51.525 TEST_HEADER include/spdk/lvol.h 00:04:51.525 CC examples/util/zipf/zipf.o 00:04:51.525 TEST_HEADER include/spdk/md5.h 00:04:51.525 TEST_HEADER include/spdk/memory.h 00:04:51.525 TEST_HEADER include/spdk/mmio.h 00:04:51.525 TEST_HEADER include/spdk/nbd.h 00:04:51.525 TEST_HEADER include/spdk/net.h 00:04:51.525 TEST_HEADER include/spdk/notify.h 00:04:51.525 TEST_HEADER include/spdk/nvme.h 00:04:51.525 CC test/dma/test_dma/test_dma.o 00:04:51.525 TEST_HEADER include/spdk/nvme_intel.h 00:04:51.525 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:51.525 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:51.525 TEST_HEADER include/spdk/nvme_spec.h 00:04:51.784 TEST_HEADER include/spdk/nvme_zns.h 00:04:51.784 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:51.784 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:51.784 TEST_HEADER include/spdk/nvmf.h 00:04:51.784 TEST_HEADER include/spdk/nvmf_spec.h 00:04:51.784 TEST_HEADER include/spdk/nvmf_transport.h 00:04:51.784 TEST_HEADER include/spdk/opal.h 00:04:51.784 TEST_HEADER include/spdk/opal_spec.h 00:04:51.784 TEST_HEADER include/spdk/pci_ids.h 00:04:51.784 TEST_HEADER include/spdk/pipe.h 00:04:51.784 TEST_HEADER include/spdk/queue.h 00:04:51.784 TEST_HEADER include/spdk/reduce.h 00:04:51.784 TEST_HEADER include/spdk/rpc.h 00:04:51.784 TEST_HEADER include/spdk/scheduler.h 00:04:51.784 TEST_HEADER include/spdk/scsi.h 00:04:51.784 TEST_HEADER include/spdk/scsi_spec.h 00:04:51.784 TEST_HEADER include/spdk/sock.h 00:04:51.784 TEST_HEADER include/spdk/stdinc.h 00:04:51.784 TEST_HEADER include/spdk/string.h 00:04:51.784 TEST_HEADER include/spdk/thread.h 00:04:51.784 TEST_HEADER include/spdk/trace.h 00:04:51.784 CC test/app/bdev_svc/bdev_svc.o 00:04:51.784 TEST_HEADER include/spdk/trace_parser.h 00:04:51.784 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.784 TEST_HEADER include/spdk/tree.h 00:04:51.784 TEST_HEADER include/spdk/ublk.h 00:04:51.784 TEST_HEADER include/spdk/util.h 00:04:51.784 TEST_HEADER include/spdk/uuid.h 00:04:51.784 TEST_HEADER include/spdk/version.h 00:04:51.784 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:51.784 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:51.784 TEST_HEADER include/spdk/vhost.h 00:04:51.784 TEST_HEADER include/spdk/vmd.h 00:04:51.784 TEST_HEADER include/spdk/xor.h 00:04:51.784 TEST_HEADER include/spdk/zipf.h 00:04:51.784 CXX test/cpp_headers/accel.o 00:04:51.784 LINK rpc_client_test 00:04:51.784 LINK zipf 00:04:51.784 LINK poller_perf 00:04:51.784 LINK nvmf_tgt 00:04:51.784 LINK spdk_trace_record 00:04:51.784 LINK bdev_svc 00:04:51.784 CXX test/cpp_headers/accel_module.o 00:04:52.043 LINK spdk_trace 00:04:52.043 CC test/env/vtophys/vtophys.o 00:04:52.043 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.043 CC examples/ioat/perf/perf.o 00:04:52.043 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.043 CXX test/cpp_headers/assert.o 00:04:52.043 LINK test_dma 00:04:52.043 CC test/event/event_perf/event_perf.o 00:04:52.043 LINK vtophys 00:04:52.043 LINK lsvmd 00:04:52.043 CXX test/cpp_headers/barrier.o 00:04:52.043 LINK env_dpdk_post_init 00:04:52.043 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:52.043 LINK ioat_perf 00:04:52.043 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.043 LINK mem_callbacks 00:04:52.302 LINK event_perf 00:04:52.302 CXX test/cpp_headers/base64.o 00:04:52.302 CC test/app/histogram_perf/histogram_perf.o 00:04:52.302 CC examples/vmd/led/led.o 00:04:52.302 LINK iscsi_tgt 00:04:52.302 CC examples/ioat/verify/verify.o 00:04:52.302 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.302 CXX test/cpp_headers/bdev.o 00:04:52.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.302 CC test/env/memory/memory_ut.o 00:04:52.302 CC test/event/reactor/reactor.o 00:04:52.302 LINK histogram_perf 00:04:52.302 LINK led 00:04:52.560 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.560 CXX test/cpp_headers/bdev_module.o 00:04:52.560 LINK reactor 00:04:52.560 CXX test/cpp_headers/bdev_zone.o 00:04:52.560 LINK verify 00:04:52.560 LINK nvme_fuzz 00:04:52.560 CC app/spdk_lspci/spdk_lspci.o 00:04:52.560 CC app/spdk_tgt/spdk_tgt.o 00:04:52.560 CC test/event/reactor_perf/reactor_perf.o 00:04:52.560 CXX test/cpp_headers/bit_array.o 00:04:52.560 LINK spdk_lspci 00:04:52.819 CC test/event/app_repeat/app_repeat.o 00:04:52.819 CC examples/idxd/perf/perf.o 00:04:52.819 CC test/event/scheduler/scheduler.o 00:04:52.819 CXX test/cpp_headers/bit_pool.o 00:04:52.819 LINK reactor_perf 00:04:52.819 LINK spdk_tgt 00:04:52.819 CXX test/cpp_headers/blob_bdev.o 00:04:52.819 LINK app_repeat 00:04:52.819 LINK vhost_fuzz 00:04:53.077 CC test/env/pci/pci_ut.o 00:04:53.077 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.077 LINK scheduler 00:04:53.077 CC app/spdk_nvme_perf/perf.o 00:04:53.077 CC test/app/jsoncat/jsoncat.o 00:04:53.077 CC test/app/stub/stub.o 00:04:53.077 LINK idxd_perf 00:04:53.077 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.077 CXX test/cpp_headers/blobfs.o 00:04:53.077 LINK jsoncat 00:04:53.336 CC app/spdk_nvme_identify/identify.o 00:04:53.336 LINK stub 00:04:53.336 LINK interrupt_tgt 00:04:53.336 CXX test/cpp_headers/blob.o 00:04:53.336 CXX test/cpp_headers/conf.o 00:04:53.336 CC examples/thread/thread/thread_ex.o 00:04:53.336 LINK pci_ut 00:04:53.336 CXX test/cpp_headers/config.o 00:04:53.336 CXX test/cpp_headers/cpuset.o 00:04:53.336 LINK memory_ut 00:04:53.336 CXX test/cpp_headers/crc16.o 00:04:53.593 CXX test/cpp_headers/crc32.o 00:04:53.593 CXX test/cpp_headers/crc64.o 00:04:53.593 CC test/blobfs/mkfs/mkfs.o 00:04:53.593 CC test/accel/dif/dif.o 00:04:53.593 LINK thread 00:04:53.593 CC test/nvme/aer/aer.o 00:04:53.593 CXX test/cpp_headers/dif.o 00:04:53.851 CC test/nvme/reset/reset.o 
00:04:53.851 CC test/lvol/esnap/esnap.o 00:04:53.851 LINK mkfs 00:04:53.851 LINK spdk_nvme_perf 00:04:53.851 CXX test/cpp_headers/dma.o 00:04:53.851 CC examples/sock/hello_world/hello_sock.o 00:04:53.851 CC test/nvme/sgl/sgl.o 00:04:53.851 LINK aer 00:04:53.851 LINK reset 00:04:54.109 CXX test/cpp_headers/endian.o 00:04:54.109 LINK spdk_nvme_identify 00:04:54.109 LINK iscsi_fuzz 00:04:54.109 LINK hello_sock 00:04:54.109 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.109 CXX test/cpp_headers/env_dpdk.o 00:04:54.109 CC test/nvme/e2edp/nvme_dp.o 00:04:54.109 CC app/spdk_nvme_discover/discovery_aer.o 00:04:54.109 LINK sgl 00:04:54.109 CC test/nvme/overhead/overhead.o 00:04:54.109 CXX test/cpp_headers/env.o 00:04:54.367 CC examples/accel/perf/accel_perf.o 00:04:54.367 LINK dif 00:04:54.367 CXX test/cpp_headers/event.o 00:04:54.367 LINK hello_fsdev 00:04:54.367 LINK spdk_nvme_discover 00:04:54.367 CC examples/blob/hello_world/hello_blob.o 00:04:54.367 LINK nvme_dp 00:04:54.367 CC examples/blob/cli/blobcli.o 00:04:54.367 CXX test/cpp_headers/fd_group.o 00:04:54.367 LINK overhead 00:04:54.625 CXX test/cpp_headers/fd.o 00:04:54.625 CC app/spdk_top/spdk_top.o 00:04:54.625 CXX test/cpp_headers/file.o 00:04:54.625 CC test/nvme/err_injection/err_injection.o 00:04:54.625 CC test/bdev/bdevio/bdevio.o 00:04:54.625 LINK hello_blob 00:04:54.625 CC examples/nvme/hello_world/hello_world.o 00:04:54.625 CXX test/cpp_headers/fsdev.o 00:04:54.884 LINK err_injection 00:04:54.884 CC examples/nvme/reconnect/reconnect.o 00:04:54.884 LINK accel_perf 00:04:54.884 LINK blobcli 00:04:54.884 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:54.884 LINK hello_world 00:04:54.884 CXX test/cpp_headers/fsdev_module.o 00:04:54.884 CC test/nvme/startup/startup.o 00:04:54.884 CC test/nvme/reserve/reserve.o 00:04:54.884 LINK bdevio 00:04:54.884 CXX test/cpp_headers/ftl.o 00:04:55.142 CC examples/nvme/arbitration/arbitration.o 00:04:55.142 CC app/vhost/vhost.o 00:04:55.142 LINK reconnect 00:04:55.142 LINK startup 00:04:55.142 CXX test/cpp_headers/fuse_dispatcher.o 00:04:55.142 LINK reserve 00:04:55.142 CC app/spdk_dd/spdk_dd.o 00:04:55.142 LINK vhost 00:04:55.401 CC examples/nvme/hotplug/hotplug.o 00:04:55.401 CXX test/cpp_headers/gpt_spec.o 00:04:55.401 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:55.401 LINK nvme_manage 00:04:55.401 CC test/nvme/simple_copy/simple_copy.o 00:04:55.401 LINK arbitration 00:04:55.401 CXX test/cpp_headers/hexlify.o 00:04:55.401 CC examples/nvme/abort/abort.o 00:04:55.401 LINK spdk_top 00:04:55.401 LINK cmb_copy 00:04:55.401 LINK hotplug 00:04:55.401 CC test/nvme/connect_stress/connect_stress.o 00:04:55.401 CC test/nvme/boot_partition/boot_partition.o 00:04:55.660 LINK simple_copy 00:04:55.660 LINK spdk_dd 00:04:55.660 CXX test/cpp_headers/histogram_data.o 00:04:55.660 CXX test/cpp_headers/idxd.o 00:04:55.660 LINK boot_partition 00:04:55.660 CC test/nvme/compliance/nvme_compliance.o 00:04:55.660 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.660 LINK connect_stress 00:04:55.660 CXX test/cpp_headers/idxd_spec.o 00:04:55.660 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:55.918 LINK abort 00:04:55.918 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.918 CXX test/cpp_headers/init.o 00:04:55.918 LINK fused_ordering 00:04:55.918 CC test/nvme/fdp/fdp.o 00:04:55.918 CC app/fio/nvme/fio_plugin.o 00:04:55.918 LINK pmr_persistence 00:04:55.918 CC app/fio/bdev/fio_plugin.o 00:04:55.918 CXX test/cpp_headers/ioat.o 00:04:55.918 LINK nvme_compliance 00:04:55.918 LINK doorbell_aers 00:04:55.918 
CC test/nvme/cuse/cuse.o 00:04:55.918 CXX test/cpp_headers/ioat_spec.o 00:04:56.193 CXX test/cpp_headers/iscsi_spec.o 00:04:56.193 CXX test/cpp_headers/json.o 00:04:56.193 CC examples/bdev/hello_world/hello_bdev.o 00:04:56.193 CC examples/bdev/bdevperf/bdevperf.o 00:04:56.193 LINK fdp 00:04:56.193 CXX test/cpp_headers/jsonrpc.o 00:04:56.193 CXX test/cpp_headers/keyring.o 00:04:56.193 CXX test/cpp_headers/keyring_module.o 00:04:56.193 CXX test/cpp_headers/likely.o 00:04:56.193 CXX test/cpp_headers/log.o 00:04:56.193 CXX test/cpp_headers/lvol.o 00:04:56.193 CXX test/cpp_headers/md5.o 00:04:56.452 LINK hello_bdev 00:04:56.452 LINK spdk_nvme 00:04:56.452 CXX test/cpp_headers/memory.o 00:04:56.452 LINK spdk_bdev 00:04:56.452 CXX test/cpp_headers/mmio.o 00:04:56.452 CXX test/cpp_headers/nbd.o 00:04:56.452 CXX test/cpp_headers/net.o 00:04:56.452 CXX test/cpp_headers/notify.o 00:04:56.452 CXX test/cpp_headers/nvme.o 00:04:56.452 CXX test/cpp_headers/nvme_intel.o 00:04:56.452 CXX test/cpp_headers/nvme_ocssd.o 00:04:56.452 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:56.452 CXX test/cpp_headers/nvme_spec.o 00:04:56.711 CXX test/cpp_headers/nvme_zns.o 00:04:56.711 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.711 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.711 CXX test/cpp_headers/nvmf.o 00:04:56.711 CXX test/cpp_headers/nvmf_spec.o 00:04:56.711 CXX test/cpp_headers/nvmf_transport.o 00:04:56.711 CXX test/cpp_headers/opal.o 00:04:56.711 CXX test/cpp_headers/opal_spec.o 00:04:56.711 LINK bdevperf 00:04:56.711 CXX test/cpp_headers/pci_ids.o 00:04:56.711 CXX test/cpp_headers/pipe.o 00:04:56.711 CXX test/cpp_headers/queue.o 00:04:56.711 CXX test/cpp_headers/reduce.o 00:04:56.711 CXX test/cpp_headers/rpc.o 00:04:56.711 CXX test/cpp_headers/scheduler.o 00:04:56.711 CXX test/cpp_headers/scsi.o 00:04:56.711 CXX test/cpp_headers/scsi_spec.o 00:04:56.969 CXX test/cpp_headers/sock.o 00:04:56.969 CXX test/cpp_headers/stdinc.o 00:04:56.969 CXX test/cpp_headers/string.o 00:04:56.969 CXX test/cpp_headers/thread.o 00:04:56.969 CXX test/cpp_headers/trace.o 00:04:56.969 CXX test/cpp_headers/trace_parser.o 00:04:56.969 CXX test/cpp_headers/tree.o 00:04:56.969 CXX test/cpp_headers/ublk.o 00:04:56.969 CXX test/cpp_headers/util.o 00:04:56.969 CXX test/cpp_headers/uuid.o 00:04:56.969 CXX test/cpp_headers/version.o 00:04:56.969 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.969 CC examples/nvmf/nvmf/nvmf.o 00:04:56.969 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.969 CXX test/cpp_headers/vhost.o 00:04:56.969 CXX test/cpp_headers/vmd.o 00:04:57.228 CXX test/cpp_headers/xor.o 00:04:57.228 CXX test/cpp_headers/zipf.o 00:04:57.228 LINK cuse 00:04:57.228 LINK nvmf 00:04:58.603 LINK esnap 00:04:58.861 00:04:58.861 real 1m2.422s 00:04:58.861 user 5m9.749s 00:04:58.861 sys 0m50.359s 00:04:58.861 14:12:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:58.861 14:12:40 make -- common/autotest_common.sh@10 -- $ set +x 00:04:58.861 ************************************ 00:04:58.861 END TEST make 00:04:58.861 ************************************ 00:04:58.861 14:12:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:58.861 14:12:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:58.861 14:12:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.861 14:12:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.861 14:12:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:58.861 14:12:40 -- pm/common@44 -- $ pid=5797 
00:04:58.861 14:12:40 -- pm/common@50 -- $ kill -TERM 5797 00:04:58.861 14:12:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.861 14:12:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:58.861 14:12:40 -- pm/common@44 -- $ pid=5798 00:04:58.861 14:12:40 -- pm/common@50 -- $ kill -TERM 5798 00:04:59.119 14:12:40 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.119 14:12:40 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.119 14:12:40 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.119 14:12:40 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.119 14:12:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.119 14:12:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.119 14:12:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.119 14:12:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.119 14:12:40 -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.119 14:12:40 -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.119 14:12:40 -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.119 14:12:40 -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.119 14:12:40 -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.119 14:12:40 -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.119 14:12:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.119 14:12:40 -- scripts/common.sh@344 -- # case "$op" in 00:04:59.119 14:12:40 -- scripts/common.sh@345 -- # : 1 00:04:59.120 14:12:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.120 14:12:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.120 14:12:40 -- scripts/common.sh@365 -- # decimal 1 00:04:59.120 14:12:40 -- scripts/common.sh@353 -- # local d=1 00:04:59.120 14:12:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.120 14:12:40 -- scripts/common.sh@355 -- # echo 1 00:04:59.120 14:12:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.120 14:12:40 -- scripts/common.sh@366 -- # decimal 2 00:04:59.120 14:12:40 -- scripts/common.sh@353 -- # local d=2 00:04:59.120 14:12:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.120 14:12:40 -- scripts/common.sh@355 -- # echo 2 00:04:59.120 14:12:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.120 14:12:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.120 14:12:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.120 14:12:40 -- scripts/common.sh@368 -- # return 0 00:04:59.120 14:12:40 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.120 14:12:40 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.120 --rc genhtml_branch_coverage=1 00:04:59.120 --rc genhtml_function_coverage=1 00:04:59.120 --rc genhtml_legend=1 00:04:59.120 --rc geninfo_all_blocks=1 00:04:59.120 --rc geninfo_unexecuted_blocks=1 00:04:59.120 00:04:59.120 ' 00:04:59.120 14:12:40 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.120 --rc genhtml_branch_coverage=1 00:04:59.120 --rc genhtml_function_coverage=1 00:04:59.120 --rc genhtml_legend=1 00:04:59.120 --rc geninfo_all_blocks=1 00:04:59.120 --rc geninfo_unexecuted_blocks=1 00:04:59.120 00:04:59.120 ' 00:04:59.120 14:12:40 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.120 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:04:59.120 --rc genhtml_branch_coverage=1 00:04:59.120 --rc genhtml_function_coverage=1 00:04:59.120 --rc genhtml_legend=1 00:04:59.120 --rc geninfo_all_blocks=1 00:04:59.120 --rc geninfo_unexecuted_blocks=1 00:04:59.120 00:04:59.120 ' 00:04:59.120 14:12:40 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.120 --rc genhtml_branch_coverage=1 00:04:59.120 --rc genhtml_function_coverage=1 00:04:59.120 --rc genhtml_legend=1 00:04:59.120 --rc geninfo_all_blocks=1 00:04:59.120 --rc geninfo_unexecuted_blocks=1 00:04:59.120 00:04:59.120 ' 00:04:59.120 14:12:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.120 14:12:40 -- nvmf/common.sh@7 -- # uname -s 00:04:59.120 14:12:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.120 14:12:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.120 14:12:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.120 14:12:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.120 14:12:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.120 14:12:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.120 14:12:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.120 14:12:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.120 14:12:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.120 14:12:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.120 14:12:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b97862e-3ac3-467d-953d-42cb848625fb 00:04:59.120 14:12:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=2b97862e-3ac3-467d-953d-42cb848625fb 00:04:59.120 14:12:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.120 14:12:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.120 14:12:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.120 14:12:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.120 14:12:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.120 14:12:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.120 14:12:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.120 14:12:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.120 14:12:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.120 14:12:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.120 14:12:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.120 14:12:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.120 14:12:40 -- paths/export.sh@5 -- # export PATH 00:04:59.120 14:12:40 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.120 14:12:40 -- nvmf/common.sh@51 -- # : 0 00:04:59.120 14:12:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.120 14:12:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.120 14:12:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.120 14:12:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.120 14:12:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.120 14:12:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.120 14:12:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.120 14:12:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.120 14:12:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.120 14:12:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.120 14:12:40 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.120 14:12:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.120 14:12:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.120 14:12:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.120 14:12:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.120 14:12:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.120 14:12:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.120 14:12:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.120 14:12:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.120 14:12:40 -- spdk/autotest.sh@48 -- # udevadm_pid=66922 00:04:59.120 14:12:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.120 14:12:40 -- pm/common@17 -- # local monitor 00:04:59.120 14:12:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.120 14:12:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.120 14:12:40 -- pm/common@25 -- # sleep 1 00:04:59.120 14:12:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.120 14:12:40 -- pm/common@21 -- # date +%s 00:04:59.120 14:12:40 -- pm/common@21 -- # date +%s 00:04:59.120 14:12:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732889560 00:04:59.120 14:12:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732889560 00:04:59.120 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732889560_collect-vmstat.pm.log 00:04:59.120 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732889560_collect-cpu-load.pm.log 00:05:00.055 14:12:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.055 14:12:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.055 14:12:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.055 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:00.055 14:12:41 -- spdk/autotest.sh@59 
-- # create_test_list 00:05:00.055 14:12:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:00.055 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 14:12:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.313 14:12:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.313 14:12:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.313 14:12:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.313 14:12:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.313 14:12:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.313 14:12:41 -- common/autotest_common.sh@1455 -- # uname 00:05:00.313 14:12:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:00.313 14:12:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.313 14:12:41 -- common/autotest_common.sh@1475 -- # uname 00:05:00.313 14:12:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:00.313 14:12:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:00.313 14:12:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:00.313 lcov: LCOV version 1.15 00:05:00.313 14:12:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.206 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.206 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.100 14:13:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.100 14:13:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.100 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.100 14:13:11 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.100 14:13:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.934 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.934 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.934 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:30.934 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:30.934 14:13:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:30.934 14:13:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:30.934 14:13:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:30.934 14:13:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.934 14:13:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:30.934 14:13:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.934 14:13:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.934 14:13:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:30.934 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.934 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.934 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:30.934 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:30.935 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.935 No valid GPT data, bailing 00:05:30.935 14:13:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.935 14:13:12 -- scripts/common.sh@394 -- # pt= 00:05:30.935 14:13:12 -- scripts/common.sh@395 -- # return 1 00:05:30.935 14:13:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.935 1+0 records in 00:05:30.935 1+0 records out 00:05:30.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284379 s, 36.9 MB/s 
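
The get_zoned_devs trace above walks every /sys/block/nvme* entry and marks a device as zoned when its queue/zoned attribute reports anything other than "none". A minimal standalone sketch of the same check, assuming only the sysfs layout visible in the trace:

    for dev in /sys/block/nvme*; do
      # "none" means a regular namespace; host-aware/host-managed values indicate a zoned device
      if [[ -e "$dev/queue/zoned" && "$(cat "$dev/queue/zoned")" != none ]]; then
        echo "zoned: $(basename "$dev")"
      fi
    done
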
00:05:30.935 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.935 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.935 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:30.935 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:30.935 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.935 No valid GPT data, bailing 00:05:30.935 14:13:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:31.195 14:13:12 -- scripts/common.sh@394 -- # pt= 00:05:31.195 14:13:12 -- scripts/common.sh@395 -- # return 1 00:05:31.195 14:13:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:31.195 1+0 records in 00:05:31.195 1+0 records out 00:05:31.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507836 s, 206 MB/s 00:05:31.195 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.195 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.195 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:31.196 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:31.196 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:31.196 No valid GPT data, bailing 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # pt= 00:05:31.196 14:13:12 -- scripts/common.sh@395 -- # return 1 00:05:31.196 14:13:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:31.196 1+0 records in 00:05:31.196 1+0 records out 00:05:31.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569973 s, 184 MB/s 00:05:31.196 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.196 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.196 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:31.196 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:31.196 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:31.196 No valid GPT data, bailing 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # pt= 00:05:31.196 14:13:12 -- scripts/common.sh@395 -- # return 1 00:05:31.196 14:13:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:31.196 1+0 records in 00:05:31.196 1+0 records out 00:05:31.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579356 s, 181 MB/s 00:05:31.196 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.196 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.196 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:31.196 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:31.196 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:31.196 No valid GPT data, bailing 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:31.196 14:13:12 -- scripts/common.sh@394 -- # pt= 00:05:31.196 14:13:12 -- scripts/common.sh@395 -- # return 1 00:05:31.196 14:13:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:31.455 1+0 records in 00:05:31.455 1+0 records out 00:05:31.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557993 s, 188 
MB/s 00:05:31.455 14:13:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.455 14:13:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.455 14:13:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:31.455 14:13:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:31.455 14:13:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:31.455 No valid GPT data, bailing 00:05:31.455 14:13:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:31.455 14:13:13 -- scripts/common.sh@394 -- # pt= 00:05:31.455 14:13:13 -- scripts/common.sh@395 -- # return 1 00:05:31.455 14:13:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:31.455 1+0 records in 00:05:31.455 1+0 records out 00:05:31.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638521 s, 164 MB/s 00:05:31.455 14:13:13 -- spdk/autotest.sh@105 -- # sync 00:05:31.455 14:13:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:31.455 14:13:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:31.455 14:13:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:33.366 14:13:14 -- spdk/autotest.sh@111 -- # uname -s 00:05:33.366 14:13:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:33.366 14:13:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:33.366 14:13:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.197 Hugepages 00:05:34.197 node hugesize free / total 00:05:34.197 node0 1048576kB 0 / 0 00:05:34.197 node0 2048kB 0 / 0 00:05:34.197 00:05:34.197 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.197 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:34.197 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:34.197 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:34.197 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:34.457 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:34.457 14:13:16 -- spdk/autotest.sh@117 -- # uname -s 00:05:34.457 14:13:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:34.457 14:13:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:34.457 14:13:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.600 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.600 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.600 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.600 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.860 14:13:17 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:36.889 14:13:18 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:36.889 14:13:18 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:36.889 14:13:18 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.889 14:13:18 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:36.889 14:13:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:36.889 14:13:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:36.889 14:13:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
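
For each /dev/nvme*n* namespace the trace above probes for an existing partition table and, when none is found, zeroes the first MiB so later tests start from a clean device. A condensed sketch of that probe-and-wipe loop, hedged to the commands visible in the log (blkid and dd; spdk-gpt.py is SPDK's own GPT reader and is omitted here):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do            # whole namespaces only, skip partitions
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z "$pt" ]]; then
        # no partition table detected: scrub the first 1 MiB
        dd if=/dev/zero of="$dev" bs=1M count=1
      fi
    done
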
00:05:36.889 14:13:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.889 14:13:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:36.889 14:13:18 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:36.889 14:13:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.889 14:13:18 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.409 Waiting for block devices as requested 00:05:37.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.409 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.671 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.980 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:42.980 14:13:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.980 14:13:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1541 -- # continue 00:05:42.980 14:13:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
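
Each PCI address reported by gen_nvme.sh is mapped back to its character device by resolving the /sys/class/nvme symlinks, and the controller is then queried with nvme id-ctrl to confirm it supports namespace management (OACS bit 3) before the namespace-revert step proceeds. A condensed sketch of that mapping and capability check, assuming the nvme-cli output format seen in the trace:

    bdf=0000:00:10.0   # example PCI address from the list printed above
    ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$ctrlr_path")              # e.g. /dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then                         # bit 3: namespace management
      echo "$ctrlr supports namespace management"
    fi
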
00:05:42.980 14:13:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.980 14:13:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1541 -- # continue 00:05:42.980 14:13:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.980 14:13:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1541 -- # continue 00:05:42.980 14:13:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:42.980 14:13:24 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.980 14:13:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.980 14:13:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.980 14:13:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.980 14:13:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.980 14:13:24 -- common/autotest_common.sh@1541 -- # continue 00:05:42.980 14:13:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.980 14:13:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.980 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 14:13:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.980 14:13:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.980 14:13:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 14:13:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.798 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.798 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.798 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.798 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.798 14:13:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:43.798 14:13:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.798 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 14:13:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:43.798 14:13:25 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:43.798 14:13:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.798 14:13:25 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:43.798 14:13:25 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:43.798 14:13:25 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:43.798 14:13:25 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:43.798 14:13:25 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:43.798 14:13:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:43.798 
14:13:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:43.798 14:13:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.798 14:13:25 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.798 14:13:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:43.798 14:13:25 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:43.798 14:13:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:43.798 14:13:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.798 14:13:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.798 14:13:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.798 14:13:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.798 14:13:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.798 14:13:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.798 14:13:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:43.798 14:13:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:43.798 14:13:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.798 14:13:25 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:43.798 14:13:25 -- common/autotest_common.sh@1570 -- # return 0 00:05:43.798 14:13:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:43.798 14:13:25 -- common/autotest_common.sh@1578 -- # return 0 00:05:43.798 14:13:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:43.798 14:13:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:43.798 14:13:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.798 14:13:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:43.798 14:13:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:43.798 14:13:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.798 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 14:13:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:43.798 14:13:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.798 14:13:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.798 14:13:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.798 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 ************************************ 00:05:43.798 START TEST env 00:05:43.798 ************************************ 00:05:43.798 14:13:25 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.058 * Looking for test storage... 
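The opal_revert_cleanup step above lists the NVMe BDFs with gen_nvme.sh | jq -r '.config[].params.traddr' and keeps only controllers whose PCI device ID is 0x0a54 (typically Intel datacenter NVMe drives that may carry Opal locking state); the QEMU controllers in this run all report 0x0010, so nothing is selected and the cleanup returns immediately. A minimal sketch of that filter, assuming the same script paths as in the trace:

    # Collect NVMe BDFs and keep only those with PCI device ID 0x0a54.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
    done
    echo "opal revert candidates: ${opal_bdfs[*]:-none}"   # none here: all devices are 0x0010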
00:05:44.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.058 14:13:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.058 14:13:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.058 14:13:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.058 14:13:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.058 14:13:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.058 14:13:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.058 14:13:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.058 14:13:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.058 14:13:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.058 14:13:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.058 14:13:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.058 14:13:25 env -- scripts/common.sh@344 -- # case "$op" in 00:05:44.058 14:13:25 env -- scripts/common.sh@345 -- # : 1 00:05:44.058 14:13:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.058 14:13:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.058 14:13:25 env -- scripts/common.sh@365 -- # decimal 1 00:05:44.058 14:13:25 env -- scripts/common.sh@353 -- # local d=1 00:05:44.058 14:13:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.058 14:13:25 env -- scripts/common.sh@355 -- # echo 1 00:05:44.058 14:13:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.058 14:13:25 env -- scripts/common.sh@366 -- # decimal 2 00:05:44.058 14:13:25 env -- scripts/common.sh@353 -- # local d=2 00:05:44.058 14:13:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.058 14:13:25 env -- scripts/common.sh@355 -- # echo 2 00:05:44.058 14:13:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.058 14:13:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.058 14:13:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.058 14:13:25 env -- scripts/common.sh@368 -- # return 0 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.058 --rc genhtml_branch_coverage=1 00:05:44.058 --rc genhtml_function_coverage=1 00:05:44.058 --rc genhtml_legend=1 00:05:44.058 --rc geninfo_all_blocks=1 00:05:44.058 --rc geninfo_unexecuted_blocks=1 00:05:44.058 00:05:44.058 ' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.058 --rc genhtml_branch_coverage=1 00:05:44.058 --rc genhtml_function_coverage=1 00:05:44.058 --rc genhtml_legend=1 00:05:44.058 --rc geninfo_all_blocks=1 00:05:44.058 --rc geninfo_unexecuted_blocks=1 00:05:44.058 00:05:44.058 ' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.058 --rc genhtml_branch_coverage=1 00:05:44.058 --rc genhtml_function_coverage=1 00:05:44.058 --rc 
genhtml_legend=1 00:05:44.058 --rc geninfo_all_blocks=1 00:05:44.058 --rc geninfo_unexecuted_blocks=1 00:05:44.058 00:05:44.058 ' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.058 --rc genhtml_branch_coverage=1 00:05:44.058 --rc genhtml_function_coverage=1 00:05:44.058 --rc genhtml_legend=1 00:05:44.058 --rc geninfo_all_blocks=1 00:05:44.058 --rc geninfo_unexecuted_blocks=1 00:05:44.058 00:05:44.058 ' 00:05:44.058 14:13:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.058 14:13:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.058 14:13:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.058 ************************************ 00:05:44.058 START TEST env_memory 00:05:44.058 ************************************ 00:05:44.058 14:13:25 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.058 00:05:44.058 00:05:44.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.058 http://cunit.sourceforge.net/ 00:05:44.058 00:05:44.058 00:05:44.058 Suite: memory 00:05:44.058 Test: alloc and free memory map ...[2024-11-29 14:13:25.732870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:44.058 passed 00:05:44.058 Test: mem map translation ...[2024-11-29 14:13:25.771671] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.058 [2024-11-29 14:13:25.771778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.058 [2024-11-29 14:13:25.771886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.058 [2024-11-29 14:13:25.771922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.058 passed 00:05:44.058 Test: mem map registration ...[2024-11-29 14:13:25.839962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:44.058 [2024-11-29 14:13:25.840067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:44.320 passed 00:05:44.320 Test: mem map adjacent registrations ...passed 00:05:44.320 00:05:44.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.320 suites 1 1 n/a 0 0 00:05:44.320 tests 4 4 4 0 0 00:05:44.320 asserts 152 152 152 0 n/a 00:05:44.320 00:05:44.320 Elapsed time = 0.232 seconds 00:05:44.320 00:05:44.320 ************************************ 00:05:44.320 END TEST env_memory 00:05:44.320 ************************************ 00:05:44.320 real 0m0.267s 00:05:44.320 user 0m0.233s 00:05:44.320 sys 0m0.027s 00:05:44.320 14:13:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.320 14:13:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:44.320 14:13:25 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.320 14:13:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.320 14:13:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.320 14:13:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.320 ************************************ 00:05:44.320 START TEST env_vtophys 00:05:44.320 ************************************ 00:05:44.320 14:13:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.320 EAL: lib.eal log level changed from notice to debug 00:05:44.320 EAL: Detected lcore 0 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 1 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 2 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 3 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 4 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 5 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 6 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 7 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 8 as core 0 on socket 0 00:05:44.320 EAL: Detected lcore 9 as core 0 on socket 0 00:05:44.320 EAL: Maximum logical cores by configuration: 128 00:05:44.320 EAL: Detected CPU lcores: 10 00:05:44.320 EAL: Detected NUMA nodes: 1 00:05:44.320 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:44.320 EAL: Detected shared linkage of DPDK 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:44.320 EAL: Registered [vdev] bus. 00:05:44.320 EAL: bus.vdev log level changed from disabled to notice 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:44.320 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:44.320 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:44.320 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:44.320 EAL: No shared files mode enabled, IPC will be disabled 00:05:44.320 EAL: No shared files mode enabled, IPC is disabled 00:05:44.320 EAL: Selected IOVA mode 'PA' 00:05:44.320 EAL: Probing VFIO support... 00:05:44.320 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.320 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:44.320 EAL: Ask a virtual area of 0x2e000 bytes 00:05:44.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:44.320 EAL: Setting up physically contiguous memory... 
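Because /sys/module/vfio is absent, the EAL skips VFIO support and settles on IOVA mode 'PA', which also explains why setup.sh bound the controllers to uio_pci_generic earlier. A quick hypothetical pre-flight check for that condition (whether IOVA mode 'VA' is actually usable additionally depends on an IOMMU being enabled):

    # Check whether the vfio modules are loaded before binding devices.
    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "vfio available: vfio-pci binding and IOVA mode VA are possible"
    else
        echo "vfio missing: expect uio_pci_generic and IOVA mode PA (as in this run)"
    fi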
00:05:44.320 EAL: Setting maximum number of open files to 524288 00:05:44.320 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:44.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:44.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.320 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:44.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.320 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:44.320 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:44.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.320 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:44.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.320 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:44.320 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:44.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.320 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:44.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.320 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:44.320 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:44.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:44.320 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:44.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:44.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:44.320 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:44.320 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:44.320 EAL: Hugepages will be freed exactly as allocated. 00:05:44.320 EAL: No shared files mode enabled, IPC is disabled 00:05:44.320 EAL: No shared files mode enabled, IPC is disabled 00:05:44.583 EAL: TSC frequency is ~2600000 KHz 00:05:44.583 EAL: Main lcore 0 is ready (tid=7face061ca40;cpuset=[0]) 00:05:44.583 EAL: Trying to obtain current memory policy. 00:05:44.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.583 EAL: Restoring previous memory policy: 0 00:05:44.583 EAL: request: mp_malloc_sync 00:05:44.583 EAL: No shared files mode enabled, IPC is disabled 00:05:44.583 EAL: Heap on socket 0 was expanded by 2MB 00:05:44.583 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:44.583 EAL: No shared files mode enabled, IPC is disabled 00:05:44.583 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:44.583 EAL: Mem event callback 'spdk:(nil)' registered 00:05:44.583 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:44.583 00:05:44.583 00:05:44.583 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.583 http://cunit.sourceforge.net/ 00:05:44.583 00:05:44.583 00:05:44.583 Suite: components_suite 00:05:44.844 Test: vtophys_malloc_test ...passed 00:05:44.844 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
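The four reservations above are sized from the memseg-list parameters printed with them: each list holds n_segs 8192 pages of hugepage_sz 2 MiB, i.e. 8192 * 2 MiB = 16 GiB = 0x400000000 bytes, which is exactly the size of each "VA reserved for memseg list" region. As a quick check:

    # Per-list VA reservation = n_segs * hugepage_sz.
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # prints 0x400000000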
00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 4MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 4MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 6MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 6MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 10MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 10MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 18MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 18MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.844 EAL: Trying to obtain current memory policy. 
00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 130MB 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was shrunk by 130MB 00:05:44.844 EAL: Trying to obtain current memory policy. 00:05:44.844 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.844 EAL: Restoring previous memory policy: 4 00:05:44.844 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.844 EAL: request: mp_malloc_sync 00:05:44.844 EAL: No shared files mode enabled, IPC is disabled 00:05:44.844 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.105 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.105 EAL: request: mp_malloc_sync 00:05:45.105 EAL: No shared files mode enabled, IPC is disabled 00:05:45.105 EAL: Heap on socket 0 was shrunk by 258MB 00:05:45.105 EAL: Trying to obtain current memory policy. 00:05:45.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.105 EAL: Restoring previous memory policy: 4 00:05:45.105 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.105 EAL: request: mp_malloc_sync 00:05:45.105 EAL: No shared files mode enabled, IPC is disabled 00:05:45.105 EAL: Heap on socket 0 was expanded by 514MB 00:05:45.105 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.366 EAL: request: mp_malloc_sync 00:05:45.366 EAL: No shared files mode enabled, IPC is disabled 00:05:45.366 EAL: Heap on socket 0 was shrunk by 514MB 00:05:45.366 EAL: Trying to obtain current memory policy. 
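The expansion sizes reported by vtophys_spdk_malloc_test so far (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB still to come) follow 2^k + 2 MB per round; this is an observation about the logged values, not a claim about the test's internals. The sequence can be reproduced with:

    # Reproduce the per-round heap expansion sizes seen above.
    for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo
    # -> 4 6 10 18 34 66 130 258 514 1026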
00:05:45.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.627 EAL: Restoring previous memory policy: 4 00:05:45.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.627 EAL: request: mp_malloc_sync 00:05:45.627 EAL: No shared files mode enabled, IPC is disabled 00:05:45.627 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.627 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.888 passed 00:05:45.888 00:05:45.888 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.888 suites 1 1 n/a 0 0 00:05:45.888 tests 2 2 2 0 0 00:05:45.888 asserts 5701 5701 5701 0 n/a 00:05:45.888 00:05:45.888 Elapsed time = 1.223 seconds 00:05:45.888 EAL: request: mp_malloc_sync 00:05:45.888 EAL: No shared files mode enabled, IPC is disabled 00:05:45.888 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:45.888 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.888 EAL: request: mp_malloc_sync 00:05:45.888 EAL: No shared files mode enabled, IPC is disabled 00:05:45.888 EAL: Heap on socket 0 was shrunk by 2MB 00:05:45.888 EAL: No shared files mode enabled, IPC is disabled 00:05:45.888 EAL: No shared files mode enabled, IPC is disabled 00:05:45.888 EAL: No shared files mode enabled, IPC is disabled 00:05:45.888 ************************************ 00:05:45.888 END TEST env_vtophys 00:05:45.888 ************************************ 00:05:45.888 00:05:45.888 real 0m1.457s 00:05:45.888 user 0m0.574s 00:05:45.888 sys 0m0.743s 00:05:45.888 14:13:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.888 14:13:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:45.888 14:13:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.888 14:13:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.888 14:13:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.888 14:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.888 ************************************ 00:05:45.888 START TEST env_pci 00:05:45.888 ************************************ 00:05:45.888 14:13:27 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.888 00:05:45.888 00:05:45.888 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.888 http://cunit.sourceforge.net/ 00:05:45.888 00:05:45.888 00:05:45.888 Suite: pci 00:05:45.888 Test: pci_hook ...[2024-11-29 14:13:27.526015] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69683 has claimed it 00:05:45.888 passed 00:05:45.888 00:05:45.888 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.888 suites 1 1 n/a 0 0 00:05:45.888 tests 1 1 1 0 0 00:05:45.888 asserts 25 25 25 0 n/a 00:05:45.888 00:05:45.888 Elapsed time = 0.003 seconds 00:05:45.888 EAL: Cannot find device (10000:00:01.0) 00:05:45.888 EAL: Failed to attach device on primary process 00:05:45.888 ************************************ 00:05:45.888 END TEST env_pci 00:05:45.888 ************************************ 00:05:45.888 00:05:45.888 real 0m0.048s 00:05:45.888 user 0m0.019s 00:05:45.888 sys 0m0.028s 00:05:45.888 14:13:27 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.888 14:13:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:45.888 14:13:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:45.888 14:13:27 env -- env/env.sh@15 -- # uname 00:05:45.888 14:13:27 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:45.888 14:13:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:45.888 14:13:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.888 14:13:27 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:45.888 14:13:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.888 14:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.888 ************************************ 00:05:45.888 START TEST env_dpdk_post_init 00:05:45.888 ************************************ 00:05:45.888 14:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.888 EAL: Detected CPU lcores: 10 00:05:45.888 EAL: Detected NUMA nodes: 1 00:05:45.888 EAL: Detected shared linkage of DPDK 00:05:45.888 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.888 EAL: Selected IOVA mode 'PA' 00:05:46.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.150 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:46.150 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:46.150 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:46.150 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:46.150 Starting DPDK initialization... 00:05:46.150 Starting SPDK post initialization... 00:05:46.150 SPDK NVMe probe 00:05:46.150 Attaching to 0000:00:10.0 00:05:46.150 Attaching to 0000:00:11.0 00:05:46.150 Attaching to 0000:00:12.0 00:05:46.150 Attaching to 0000:00:13.0 00:05:46.150 Attached to 0000:00:13.0 00:05:46.150 Attached to 0000:00:10.0 00:05:46.150 Attached to 0000:00:11.0 00:05:46.150 Attached to 0000:00:12.0 00:05:46.150 Cleaning up... 
00:05:46.150 ************************************ 00:05:46.150 END TEST env_dpdk_post_init 00:05:46.150 ************************************ 00:05:46.150 00:05:46.150 real 0m0.221s 00:05:46.150 user 0m0.053s 00:05:46.150 sys 0m0.072s 00:05:46.150 14:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.150 14:13:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.150 14:13:27 env -- env/env.sh@26 -- # uname 00:05:46.150 14:13:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.150 14:13:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.150 14:13:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.150 14:13:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.150 14:13:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.150 ************************************ 00:05:46.150 START TEST env_mem_callbacks 00:05:46.150 ************************************ 00:05:46.150 14:13:27 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.150 EAL: Detected CPU lcores: 10 00:05:46.150 EAL: Detected NUMA nodes: 1 00:05:46.150 EAL: Detected shared linkage of DPDK 00:05:46.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.411 EAL: Selected IOVA mode 'PA' 00:05:46.411 00:05:46.411 00:05:46.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.411 http://cunit.sourceforge.net/ 00:05:46.411 00:05:46.411 00:05:46.411 Suite: memory 00:05:46.411 Test: test ... 00:05:46.411 register 0x200000200000 2097152 00:05:46.411 malloc 3145728 00:05:46.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.411 register 0x200000400000 4194304 00:05:46.411 buf 0x200000500000 len 3145728 PASSED 00:05:46.411 malloc 64 00:05:46.411 buf 0x2000004fff40 len 64 PASSED 00:05:46.411 malloc 4194304 00:05:46.411 register 0x200000800000 6291456 00:05:46.411 buf 0x200000a00000 len 4194304 PASSED 00:05:46.411 free 0x200000500000 3145728 00:05:46.411 free 0x2000004fff40 64 00:05:46.411 unregister 0x200000400000 4194304 PASSED 00:05:46.411 free 0x200000a00000 4194304 00:05:46.411 unregister 0x200000800000 6291456 PASSED 00:05:46.411 malloc 8388608 00:05:46.411 register 0x200000400000 10485760 00:05:46.411 buf 0x200000600000 len 8388608 PASSED 00:05:46.411 free 0x200000600000 8388608 00:05:46.411 unregister 0x200000400000 10485760 PASSED 00:05:46.411 passed 00:05:46.411 00:05:46.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.411 suites 1 1 n/a 0 0 00:05:46.411 tests 1 1 1 0 0 00:05:46.411 asserts 15 15 15 0 n/a 00:05:46.411 00:05:46.411 Elapsed time = 0.012 seconds 00:05:46.411 00:05:46.411 real 0m0.184s 00:05:46.411 user 0m0.023s 00:05:46.411 sys 0m0.057s 00:05:46.411 ************************************ 00:05:46.411 END TEST env_mem_callbacks 00:05:46.411 ************************************ 00:05:46.411 14:13:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.411 14:13:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.411 00:05:46.411 real 0m2.600s 00:05:46.411 user 0m1.060s 00:05:46.411 sys 0m1.133s 00:05:46.411 14:13:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.411 ************************************ 00:05:46.411 END TEST env 00:05:46.411 ************************************ 00:05:46.411 14:13:28 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.411 14:13:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.411 14:13:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.411 14:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.411 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.411 ************************************ 00:05:46.411 START TEST rpc 00:05:46.411 ************************************ 00:05:46.411 14:13:28 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.673 * Looking for test storage... 00:05:46.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.673 14:13:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.673 14:13:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.673 14:13:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.673 14:13:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.673 14:13:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.673 14:13:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.673 14:13:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.673 14:13:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.673 14:13:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.673 14:13:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.673 14:13:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.673 14:13:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.673 14:13:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.673 14:13:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.673 14:13:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.673 14:13:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.673 --rc genhtml_branch_coverage=1 00:05:46.673 --rc genhtml_function_coverage=1 00:05:46.673 --rc genhtml_legend=1 00:05:46.673 --rc geninfo_all_blocks=1 00:05:46.673 --rc geninfo_unexecuted_blocks=1 00:05:46.673 00:05:46.673 ' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.673 --rc genhtml_branch_coverage=1 00:05:46.673 --rc genhtml_function_coverage=1 00:05:46.673 --rc genhtml_legend=1 00:05:46.673 --rc geninfo_all_blocks=1 00:05:46.673 --rc geninfo_unexecuted_blocks=1 00:05:46.673 00:05:46.673 ' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.673 --rc genhtml_branch_coverage=1 00:05:46.673 --rc genhtml_function_coverage=1 00:05:46.673 --rc genhtml_legend=1 00:05:46.673 --rc geninfo_all_blocks=1 00:05:46.673 --rc geninfo_unexecuted_blocks=1 00:05:46.673 00:05:46.673 ' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.673 --rc genhtml_branch_coverage=1 00:05:46.673 --rc genhtml_function_coverage=1 00:05:46.673 --rc genhtml_legend=1 00:05:46.673 --rc geninfo_all_blocks=1 00:05:46.673 --rc geninfo_unexecuted_blocks=1 00:05:46.673 00:05:46.673 ' 00:05:46.673 14:13:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69810 00:05:46.673 14:13:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.673 14:13:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69810 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@831 -- # '[' -z 69810 ']' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.673 14:13:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.673 14:13:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:46.673 [2024-11-29 14:13:28.407147] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:46.673 [2024-11-29 14:13:28.407407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69810 ] 00:05:46.949 [2024-11-29 14:13:28.551017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.949 [2024-11-29 14:13:28.600421] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
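rpc.sh starts spdk_tgt with -e bdev, records its pid (69810 here), and waits for the /var/tmp/spdk.sock RPC socket before issuing rpc_cmd calls; the rpc_integrity test that follows then creates a malloc bdev, layers a passthru bdev on top, and deletes both again. A minimal sketch of the same flow driven with scripts/rpc.py directly (paths relative to an SPDK checkout, error handling omitted):

    # Start the target, wait for its RPC socket, then exercise the bdev RPCs.
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)   # 8 MiB bdev, 512-byte blocks; prints its name
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2: the malloc bdev plus Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    kill "$tgt_pid"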
00:05:46.949 [2024-11-29 14:13:28.600741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69810' to capture a snapshot of events at runtime. 00:05:46.949 [2024-11-29 14:13:28.600834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.949 [2024-11-29 14:13:28.600868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.949 [2024-11-29 14:13:28.600893] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69810 for offline analysis/debug. 00:05:46.949 [2024-11-29 14:13:28.600957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.522 14:13:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.522 14:13:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.522 14:13:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.522 14:13:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.522 14:13:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.522 14:13:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.522 14:13:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.522 14:13:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.522 14:13:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.522 ************************************ 00:05:47.522 START TEST rpc_integrity 00:05:47.522 ************************************ 00:05:47.522 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:47.522 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.522 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.522 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.522 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.522 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.522 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.784 { 00:05:47.784 "name": "Malloc0", 00:05:47.784 "aliases": [ 00:05:47.784 "d556bfe4-a40c-471a-a561-6d20f881b868" 00:05:47.784 ], 
00:05:47.784 "product_name": "Malloc disk", 00:05:47.784 "block_size": 512, 00:05:47.784 "num_blocks": 16384, 00:05:47.784 "uuid": "d556bfe4-a40c-471a-a561-6d20f881b868", 00:05:47.784 "assigned_rate_limits": { 00:05:47.784 "rw_ios_per_sec": 0, 00:05:47.784 "rw_mbytes_per_sec": 0, 00:05:47.784 "r_mbytes_per_sec": 0, 00:05:47.784 "w_mbytes_per_sec": 0 00:05:47.784 }, 00:05:47.784 "claimed": false, 00:05:47.784 "zoned": false, 00:05:47.784 "supported_io_types": { 00:05:47.784 "read": true, 00:05:47.784 "write": true, 00:05:47.784 "unmap": true, 00:05:47.784 "flush": true, 00:05:47.784 "reset": true, 00:05:47.784 "nvme_admin": false, 00:05:47.784 "nvme_io": false, 00:05:47.784 "nvme_io_md": false, 00:05:47.784 "write_zeroes": true, 00:05:47.784 "zcopy": true, 00:05:47.784 "get_zone_info": false, 00:05:47.784 "zone_management": false, 00:05:47.784 "zone_append": false, 00:05:47.784 "compare": false, 00:05:47.784 "compare_and_write": false, 00:05:47.784 "abort": true, 00:05:47.784 "seek_hole": false, 00:05:47.784 "seek_data": false, 00:05:47.784 "copy": true, 00:05:47.784 "nvme_iov_md": false 00:05:47.784 }, 00:05:47.784 "memory_domains": [ 00:05:47.784 { 00:05:47.784 "dma_device_id": "system", 00:05:47.784 "dma_device_type": 1 00:05:47.784 }, 00:05:47.784 { 00:05:47.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.784 "dma_device_type": 2 00:05:47.784 } 00:05:47.784 ], 00:05:47.784 "driver_specific": {} 00:05:47.784 } 00:05:47.784 ]' 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.784 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.784 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.784 [2024-11-29 14:13:29.405004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.784 [2024-11-29 14:13:29.405078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.784 [2024-11-29 14:13:29.405113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:47.784 [2024-11-29 14:13:29.405124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.785 [2024-11-29 14:13:29.407763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.785 [2024-11-29 14:13:29.407812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.785 Passthru0 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.785 { 00:05:47.785 "name": "Malloc0", 00:05:47.785 "aliases": [ 00:05:47.785 "d556bfe4-a40c-471a-a561-6d20f881b868" 00:05:47.785 ], 00:05:47.785 "product_name": "Malloc disk", 00:05:47.785 "block_size": 512, 00:05:47.785 "num_blocks": 16384, 00:05:47.785 "uuid": "d556bfe4-a40c-471a-a561-6d20f881b868", 00:05:47.785 "assigned_rate_limits": { 00:05:47.785 "rw_ios_per_sec": 0, 
00:05:47.785 "rw_mbytes_per_sec": 0, 00:05:47.785 "r_mbytes_per_sec": 0, 00:05:47.785 "w_mbytes_per_sec": 0 00:05:47.785 }, 00:05:47.785 "claimed": true, 00:05:47.785 "claim_type": "exclusive_write", 00:05:47.785 "zoned": false, 00:05:47.785 "supported_io_types": { 00:05:47.785 "read": true, 00:05:47.785 "write": true, 00:05:47.785 "unmap": true, 00:05:47.785 "flush": true, 00:05:47.785 "reset": true, 00:05:47.785 "nvme_admin": false, 00:05:47.785 "nvme_io": false, 00:05:47.785 "nvme_io_md": false, 00:05:47.785 "write_zeroes": true, 00:05:47.785 "zcopy": true, 00:05:47.785 "get_zone_info": false, 00:05:47.785 "zone_management": false, 00:05:47.785 "zone_append": false, 00:05:47.785 "compare": false, 00:05:47.785 "compare_and_write": false, 00:05:47.785 "abort": true, 00:05:47.785 "seek_hole": false, 00:05:47.785 "seek_data": false, 00:05:47.785 "copy": true, 00:05:47.785 "nvme_iov_md": false 00:05:47.785 }, 00:05:47.785 "memory_domains": [ 00:05:47.785 { 00:05:47.785 "dma_device_id": "system", 00:05:47.785 "dma_device_type": 1 00:05:47.785 }, 00:05:47.785 { 00:05:47.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.785 "dma_device_type": 2 00:05:47.785 } 00:05:47.785 ], 00:05:47.785 "driver_specific": {} 00:05:47.785 }, 00:05:47.785 { 00:05:47.785 "name": "Passthru0", 00:05:47.785 "aliases": [ 00:05:47.785 "c8224cd2-cc3b-537c-9815-1a1bd588d94e" 00:05:47.785 ], 00:05:47.785 "product_name": "passthru", 00:05:47.785 "block_size": 512, 00:05:47.785 "num_blocks": 16384, 00:05:47.785 "uuid": "c8224cd2-cc3b-537c-9815-1a1bd588d94e", 00:05:47.785 "assigned_rate_limits": { 00:05:47.785 "rw_ios_per_sec": 0, 00:05:47.785 "rw_mbytes_per_sec": 0, 00:05:47.785 "r_mbytes_per_sec": 0, 00:05:47.785 "w_mbytes_per_sec": 0 00:05:47.785 }, 00:05:47.785 "claimed": false, 00:05:47.785 "zoned": false, 00:05:47.785 "supported_io_types": { 00:05:47.785 "read": true, 00:05:47.785 "write": true, 00:05:47.785 "unmap": true, 00:05:47.785 "flush": true, 00:05:47.785 "reset": true, 00:05:47.785 "nvme_admin": false, 00:05:47.785 "nvme_io": false, 00:05:47.785 "nvme_io_md": false, 00:05:47.785 "write_zeroes": true, 00:05:47.785 "zcopy": true, 00:05:47.785 "get_zone_info": false, 00:05:47.785 "zone_management": false, 00:05:47.785 "zone_append": false, 00:05:47.785 "compare": false, 00:05:47.785 "compare_and_write": false, 00:05:47.785 "abort": true, 00:05:47.785 "seek_hole": false, 00:05:47.785 "seek_data": false, 00:05:47.785 "copy": true, 00:05:47.785 "nvme_iov_md": false 00:05:47.785 }, 00:05:47.785 "memory_domains": [ 00:05:47.785 { 00:05:47.785 "dma_device_id": "system", 00:05:47.785 "dma_device_type": 1 00:05:47.785 }, 00:05:47.785 { 00:05:47.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.785 "dma_device_type": 2 00:05:47.785 } 00:05:47.785 ], 00:05:47.785 "driver_specific": { 00:05:47.785 "passthru": { 00:05:47.785 "name": "Passthru0", 00:05:47.785 "base_bdev_name": "Malloc0" 00:05:47.785 } 00:05:47.785 } 00:05:47.785 } 00:05:47.785 ]' 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.785 ************************************ 00:05:47.785 END TEST rpc_integrity 00:05:47.785 ************************************ 00:05:47.785 14:13:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.785 00:05:47.785 real 0m0.246s 00:05:47.785 user 0m0.135s 00:05:47.785 sys 0m0.037s 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.785 14:13:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 14:13:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.785 14:13:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.785 14:13:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.785 14:13:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 ************************************ 00:05:48.047 START TEST rpc_plugins 00:05:48.047 ************************************ 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.047 { 00:05:48.047 "name": "Malloc1", 00:05:48.047 "aliases": [ 00:05:48.047 "414e86e4-4c78-4f91-8456-f6677d62281a" 00:05:48.047 ], 00:05:48.047 "product_name": "Malloc disk", 00:05:48.047 "block_size": 4096, 00:05:48.047 "num_blocks": 256, 00:05:48.047 "uuid": "414e86e4-4c78-4f91-8456-f6677d62281a", 00:05:48.047 "assigned_rate_limits": { 00:05:48.047 "rw_ios_per_sec": 0, 00:05:48.047 "rw_mbytes_per_sec": 0, 00:05:48.047 "r_mbytes_per_sec": 0, 00:05:48.047 "w_mbytes_per_sec": 0 00:05:48.047 }, 00:05:48.047 "claimed": false, 00:05:48.047 "zoned": false, 00:05:48.047 "supported_io_types": { 00:05:48.047 "read": true, 00:05:48.047 "write": true, 00:05:48.047 "unmap": true, 00:05:48.047 "flush": true, 00:05:48.047 "reset": true, 00:05:48.047 "nvme_admin": false, 00:05:48.047 "nvme_io": false, 00:05:48.047 "nvme_io_md": false, 00:05:48.047 "write_zeroes": true, 
00:05:48.047 "zcopy": true, 00:05:48.047 "get_zone_info": false, 00:05:48.047 "zone_management": false, 00:05:48.047 "zone_append": false, 00:05:48.047 "compare": false, 00:05:48.047 "compare_and_write": false, 00:05:48.047 "abort": true, 00:05:48.047 "seek_hole": false, 00:05:48.047 "seek_data": false, 00:05:48.047 "copy": true, 00:05:48.047 "nvme_iov_md": false 00:05:48.047 }, 00:05:48.047 "memory_domains": [ 00:05:48.047 { 00:05:48.047 "dma_device_id": "system", 00:05:48.047 "dma_device_type": 1 00:05:48.047 }, 00:05:48.047 { 00:05:48.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.047 "dma_device_type": 2 00:05:48.047 } 00:05:48.047 ], 00:05:48.047 "driver_specific": {} 00:05:48.047 } 00:05:48.047 ]' 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.047 ************************************ 00:05:48.047 END TEST rpc_plugins 00:05:48.047 ************************************ 00:05:48.047 14:13:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.047 00:05:48.047 real 0m0.115s 00:05:48.047 user 0m0.065s 00:05:48.047 sys 0m0.015s 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.047 14:13:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.047 14:13:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.047 14:13:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 ************************************ 00:05:48.047 START TEST rpc_trace_cmd_test 00:05:48.047 ************************************ 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.047 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.047 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69810", 00:05:48.047 "tpoint_group_mask": "0x8", 00:05:48.047 "iscsi_conn": { 00:05:48.047 "mask": "0x2", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "scsi": { 00:05:48.047 
"mask": "0x4", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "bdev": { 00:05:48.047 "mask": "0x8", 00:05:48.047 "tpoint_mask": "0xffffffffffffffff" 00:05:48.047 }, 00:05:48.047 "nvmf_rdma": { 00:05:48.047 "mask": "0x10", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "nvmf_tcp": { 00:05:48.047 "mask": "0x20", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "ftl": { 00:05:48.047 "mask": "0x40", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "blobfs": { 00:05:48.047 "mask": "0x80", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "dsa": { 00:05:48.047 "mask": "0x200", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "thread": { 00:05:48.047 "mask": "0x400", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "nvme_pcie": { 00:05:48.047 "mask": "0x800", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "iaa": { 00:05:48.047 "mask": "0x1000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "nvme_tcp": { 00:05:48.047 "mask": "0x2000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "bdev_nvme": { 00:05:48.047 "mask": "0x4000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "sock": { 00:05:48.047 "mask": "0x8000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "blob": { 00:05:48.047 "mask": "0x10000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 }, 00:05:48.047 "bdev_raid": { 00:05:48.047 "mask": "0x20000", 00:05:48.047 "tpoint_mask": "0x0" 00:05:48.047 } 00:05:48.048 }' 00:05:48.048 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.048 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:48.048 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.048 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.048 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.309 ************************************ 00:05:48.309 END TEST rpc_trace_cmd_test 00:05:48.309 ************************************ 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.309 00:05:48.309 real 0m0.177s 00:05:48.309 user 0m0.139s 00:05:48.309 sys 0m0.027s 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.309 14:13:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 14:13:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.309 14:13:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.309 14:13:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.309 14:13:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.309 14:13:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.309 14:13:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 ************************************ 00:05:48.309 START TEST rpc_daemon_integrity 00:05:48.309 ************************************ 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 
-- # rpc_integrity 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.309 14:13:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.309 { 00:05:48.309 "name": "Malloc2", 00:05:48.309 "aliases": [ 00:05:48.309 "78566a99-015e-4e2d-9b59-b28c6387cef2" 00:05:48.309 ], 00:05:48.309 "product_name": "Malloc disk", 00:05:48.309 "block_size": 512, 00:05:48.309 "num_blocks": 16384, 00:05:48.309 "uuid": "78566a99-015e-4e2d-9b59-b28c6387cef2", 00:05:48.309 "assigned_rate_limits": { 00:05:48.309 "rw_ios_per_sec": 0, 00:05:48.309 "rw_mbytes_per_sec": 0, 00:05:48.309 "r_mbytes_per_sec": 0, 00:05:48.309 "w_mbytes_per_sec": 0 00:05:48.309 }, 00:05:48.309 "claimed": false, 00:05:48.309 "zoned": false, 00:05:48.309 "supported_io_types": { 00:05:48.309 "read": true, 00:05:48.309 "write": true, 00:05:48.309 "unmap": true, 00:05:48.309 "flush": true, 00:05:48.309 "reset": true, 00:05:48.309 "nvme_admin": false, 00:05:48.309 "nvme_io": false, 00:05:48.309 "nvme_io_md": false, 00:05:48.309 "write_zeroes": true, 00:05:48.309 "zcopy": true, 00:05:48.309 "get_zone_info": false, 00:05:48.309 "zone_management": false, 00:05:48.309 "zone_append": false, 00:05:48.309 "compare": false, 00:05:48.309 "compare_and_write": false, 00:05:48.309 "abort": true, 00:05:48.309 "seek_hole": false, 00:05:48.309 "seek_data": false, 00:05:48.309 "copy": true, 00:05:48.309 "nvme_iov_md": false 00:05:48.309 }, 00:05:48.309 "memory_domains": [ 00:05:48.309 { 00:05:48.309 "dma_device_id": "system", 00:05:48.309 "dma_device_type": 1 00:05:48.309 }, 00:05:48.309 { 00:05:48.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.309 "dma_device_type": 2 00:05:48.309 } 00:05:48.309 ], 00:05:48.309 "driver_specific": {} 00:05:48.309 } 00:05:48.309 ]' 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.309 [2024-11-29 14:13:30.061270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.309 [2024-11-29 14:13:30.061414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.309 [2024-11-29 14:13:30.061441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:48.309 [2024-11-29 14:13:30.061450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.309 [2024-11-29 14:13:30.063601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.309 [2024-11-29 14:13:30.063634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.309 Passthru0 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.309 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.310 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.310 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.310 { 00:05:48.310 "name": "Malloc2", 00:05:48.310 "aliases": [ 00:05:48.310 "78566a99-015e-4e2d-9b59-b28c6387cef2" 00:05:48.310 ], 00:05:48.310 "product_name": "Malloc disk", 00:05:48.310 "block_size": 512, 00:05:48.310 "num_blocks": 16384, 00:05:48.310 "uuid": "78566a99-015e-4e2d-9b59-b28c6387cef2", 00:05:48.310 "assigned_rate_limits": { 00:05:48.310 "rw_ios_per_sec": 0, 00:05:48.310 "rw_mbytes_per_sec": 0, 00:05:48.310 "r_mbytes_per_sec": 0, 00:05:48.310 "w_mbytes_per_sec": 0 00:05:48.310 }, 00:05:48.310 "claimed": true, 00:05:48.310 "claim_type": "exclusive_write", 00:05:48.310 "zoned": false, 00:05:48.310 "supported_io_types": { 00:05:48.310 "read": true, 00:05:48.310 "write": true, 00:05:48.310 "unmap": true, 00:05:48.310 "flush": true, 00:05:48.310 "reset": true, 00:05:48.310 "nvme_admin": false, 00:05:48.310 "nvme_io": false, 00:05:48.310 "nvme_io_md": false, 00:05:48.310 "write_zeroes": true, 00:05:48.310 "zcopy": true, 00:05:48.310 "get_zone_info": false, 00:05:48.310 "zone_management": false, 00:05:48.310 "zone_append": false, 00:05:48.310 "compare": false, 00:05:48.310 "compare_and_write": false, 00:05:48.310 "abort": true, 00:05:48.310 "seek_hole": false, 00:05:48.310 "seek_data": false, 00:05:48.310 "copy": true, 00:05:48.310 "nvme_iov_md": false 00:05:48.310 }, 00:05:48.310 "memory_domains": [ 00:05:48.310 { 00:05:48.310 "dma_device_id": "system", 00:05:48.310 "dma_device_type": 1 00:05:48.310 }, 00:05:48.310 { 00:05:48.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.310 "dma_device_type": 2 00:05:48.310 } 00:05:48.310 ], 00:05:48.310 "driver_specific": {} 00:05:48.310 }, 00:05:48.310 { 00:05:48.310 "name": "Passthru0", 00:05:48.310 "aliases": [ 00:05:48.310 "f7c5e50a-0600-5f93-ac6d-7b71d85726b9" 00:05:48.310 ], 00:05:48.310 "product_name": "passthru", 00:05:48.310 "block_size": 512, 00:05:48.310 "num_blocks": 16384, 00:05:48.310 "uuid": "f7c5e50a-0600-5f93-ac6d-7b71d85726b9", 00:05:48.310 "assigned_rate_limits": { 00:05:48.310 "rw_ios_per_sec": 0, 00:05:48.310 "rw_mbytes_per_sec": 0, 00:05:48.310 "r_mbytes_per_sec": 0, 00:05:48.310 "w_mbytes_per_sec": 0 00:05:48.310 }, 
00:05:48.310 "claimed": false, 00:05:48.310 "zoned": false, 00:05:48.310 "supported_io_types": { 00:05:48.310 "read": true, 00:05:48.310 "write": true, 00:05:48.310 "unmap": true, 00:05:48.310 "flush": true, 00:05:48.310 "reset": true, 00:05:48.310 "nvme_admin": false, 00:05:48.310 "nvme_io": false, 00:05:48.310 "nvme_io_md": false, 00:05:48.310 "write_zeroes": true, 00:05:48.310 "zcopy": true, 00:05:48.310 "get_zone_info": false, 00:05:48.310 "zone_management": false, 00:05:48.310 "zone_append": false, 00:05:48.310 "compare": false, 00:05:48.310 "compare_and_write": false, 00:05:48.310 "abort": true, 00:05:48.310 "seek_hole": false, 00:05:48.310 "seek_data": false, 00:05:48.310 "copy": true, 00:05:48.310 "nvme_iov_md": false 00:05:48.310 }, 00:05:48.310 "memory_domains": [ 00:05:48.310 { 00:05:48.310 "dma_device_id": "system", 00:05:48.310 "dma_device_type": 1 00:05:48.310 }, 00:05:48.310 { 00:05:48.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.310 "dma_device_type": 2 00:05:48.310 } 00:05:48.310 ], 00:05:48.310 "driver_specific": { 00:05:48.310 "passthru": { 00:05:48.310 "name": "Passthru0", 00:05:48.310 "base_bdev_name": "Malloc2" 00:05:48.310 } 00:05:48.310 } 00:05:48.310 } 00:05:48.310 ]' 00:05:48.310 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.572 ************************************ 00:05:48.572 END TEST rpc_daemon_integrity 00:05:48.572 ************************************ 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.572 00:05:48.572 real 0m0.223s 00:05:48.572 user 0m0.128s 00:05:48.572 sys 0m0.035s 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.572 14:13:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.572 14:13:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.572 14:13:30 rpc -- rpc/rpc.sh@84 -- # killprocess 69810 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@950 -- # '[' -z 69810 ']' 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@954 -- # kill -0 69810 00:05:48.572 14:13:30 rpc 
-- common/autotest_common.sh@955 -- # uname 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69810 00:05:48.572 killing process with pid 69810 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69810' 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@969 -- # kill 69810 00:05:48.572 14:13:30 rpc -- common/autotest_common.sh@974 -- # wait 69810 00:05:48.833 00:05:48.833 real 0m2.295s 00:05:48.833 user 0m2.668s 00:05:48.833 sys 0m0.674s 00:05:48.833 14:13:30 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.833 ************************************ 00:05:48.833 END TEST rpc 00:05:48.834 ************************************ 00:05:48.834 14:13:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.834 14:13:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.834 14:13:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.834 14:13:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.834 14:13:30 -- common/autotest_common.sh@10 -- # set +x 00:05:48.834 ************************************ 00:05:48.834 START TEST skip_rpc 00:05:48.834 ************************************ 00:05:48.834 14:13:30 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.834 * Looking for test storage... 00:05:48.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.834 14:13:30 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.834 14:13:30 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.834 14:13:30 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.096 14:13:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.096 --rc genhtml_branch_coverage=1 00:05:49.096 --rc genhtml_function_coverage=1 00:05:49.096 --rc genhtml_legend=1 00:05:49.096 --rc geninfo_all_blocks=1 00:05:49.096 --rc geninfo_unexecuted_blocks=1 00:05:49.096 00:05:49.096 ' 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.096 --rc genhtml_branch_coverage=1 00:05:49.096 --rc genhtml_function_coverage=1 00:05:49.096 --rc genhtml_legend=1 00:05:49.096 --rc geninfo_all_blocks=1 00:05:49.096 --rc geninfo_unexecuted_blocks=1 00:05:49.096 00:05:49.096 ' 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.096 --rc genhtml_branch_coverage=1 00:05:49.096 --rc genhtml_function_coverage=1 00:05:49.096 --rc genhtml_legend=1 00:05:49.096 --rc geninfo_all_blocks=1 00:05:49.096 --rc geninfo_unexecuted_blocks=1 00:05:49.096 00:05:49.096 ' 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.096 --rc genhtml_branch_coverage=1 00:05:49.096 --rc genhtml_function_coverage=1 00:05:49.096 --rc genhtml_legend=1 00:05:49.096 --rc geninfo_all_blocks=1 00:05:49.096 --rc geninfo_unexecuted_blocks=1 00:05:49.096 00:05:49.096 ' 00:05:49.096 14:13:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.096 14:13:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.096 14:13:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.096 14:13:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.096 ************************************ 00:05:49.096 START TEST skip_rpc 00:05:49.096 ************************************ 00:05:49.096 14:13:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:49.096 14:13:30 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=70006 00:05:49.096 14:13:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.096 14:13:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.096 14:13:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.096 [2024-11-29 14:13:30.743348] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:49.096 [2024-11-29 14:13:30.743872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70006 ] 00:05:49.357 [2024-11-29 14:13:30.894028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.357 [2024-11-29 14:13:30.924784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70006 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70006 ']' 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70006 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70006 00:05:54.644 killing process with pid 70006 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70006' 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- 
common/autotest_common.sh@969 -- # kill 70006 00:05:54.644 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70006 00:05:54.644 ************************************ 00:05:54.644 END TEST skip_rpc 00:05:54.644 ************************************ 00:05:54.644 00:05:54.644 real 0m5.251s 00:05:54.644 user 0m4.917s 00:05:54.644 sys 0m0.230s 00:05:54.645 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.645 14:13:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.645 14:13:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.645 14:13:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.645 14:13:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.645 14:13:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.645 ************************************ 00:05:54.645 START TEST skip_rpc_with_json 00:05:54.645 ************************************ 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70088 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70088 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70088 ']' 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.645 14:13:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.645 [2024-11-29 14:13:36.050565] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:54.645 [2024-11-29 14:13:36.050881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70088 ] 00:05:54.645 [2024-11-29 14:13:36.192727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.645 [2024-11-29 14:13:36.243187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.212 [2024-11-29 14:13:36.975482] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.212 request: 00:05:55.212 { 00:05:55.212 "trtype": "tcp", 00:05:55.212 "method": "nvmf_get_transports", 00:05:55.212 "req_id": 1 00:05:55.212 } 00:05:55.212 Got JSON-RPC error response 00:05:55.212 response: 00:05:55.212 { 00:05:55.212 "code": -19, 00:05:55.212 "message": "No such device" 00:05:55.212 } 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.212 [2024-11-29 14:13:36.983598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.212 14:13:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.470 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.470 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.470 { 00:05:55.470 "subsystems": [ 00:05:55.470 { 00:05:55.470 "subsystem": "fsdev", 00:05:55.470 "config": [ 00:05:55.470 { 00:05:55.470 "method": "fsdev_set_opts", 00:05:55.470 "params": { 00:05:55.470 "fsdev_io_pool_size": 65535, 00:05:55.470 "fsdev_io_cache_size": 256 00:05:55.470 } 00:05:55.470 } 00:05:55.470 ] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "keyring", 00:05:55.470 "config": [] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "iobuf", 00:05:55.470 "config": [ 00:05:55.470 { 00:05:55.470 "method": "iobuf_set_options", 00:05:55.470 "params": { 00:05:55.470 "small_pool_count": 8192, 00:05:55.470 "large_pool_count": 1024, 00:05:55.470 "small_bufsize": 8192, 00:05:55.470 "large_bufsize": 135168 00:05:55.470 } 00:05:55.470 } 00:05:55.470 ] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "sock", 00:05:55.470 "config": [ 00:05:55.470 { 00:05:55.470 "method": 
"sock_set_default_impl", 00:05:55.470 "params": { 00:05:55.470 "impl_name": "posix" 00:05:55.470 } 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "method": "sock_impl_set_options", 00:05:55.470 "params": { 00:05:55.470 "impl_name": "ssl", 00:05:55.470 "recv_buf_size": 4096, 00:05:55.470 "send_buf_size": 4096, 00:05:55.470 "enable_recv_pipe": true, 00:05:55.470 "enable_quickack": false, 00:05:55.470 "enable_placement_id": 0, 00:05:55.470 "enable_zerocopy_send_server": true, 00:05:55.470 "enable_zerocopy_send_client": false, 00:05:55.470 "zerocopy_threshold": 0, 00:05:55.470 "tls_version": 0, 00:05:55.470 "enable_ktls": false 00:05:55.470 } 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "method": "sock_impl_set_options", 00:05:55.470 "params": { 00:05:55.470 "impl_name": "posix", 00:05:55.470 "recv_buf_size": 2097152, 00:05:55.470 "send_buf_size": 2097152, 00:05:55.470 "enable_recv_pipe": true, 00:05:55.470 "enable_quickack": false, 00:05:55.470 "enable_placement_id": 0, 00:05:55.470 "enable_zerocopy_send_server": true, 00:05:55.470 "enable_zerocopy_send_client": false, 00:05:55.470 "zerocopy_threshold": 0, 00:05:55.470 "tls_version": 0, 00:05:55.470 "enable_ktls": false 00:05:55.470 } 00:05:55.470 } 00:05:55.470 ] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "vmd", 00:05:55.470 "config": [] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "accel", 00:05:55.470 "config": [ 00:05:55.470 { 00:05:55.470 "method": "accel_set_options", 00:05:55.470 "params": { 00:05:55.470 "small_cache_size": 128, 00:05:55.470 "large_cache_size": 16, 00:05:55.470 "task_count": 2048, 00:05:55.470 "sequence_count": 2048, 00:05:55.470 "buf_count": 2048 00:05:55.470 } 00:05:55.470 } 00:05:55.470 ] 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "subsystem": "bdev", 00:05:55.470 "config": [ 00:05:55.470 { 00:05:55.470 "method": "bdev_set_options", 00:05:55.470 "params": { 00:05:55.470 "bdev_io_pool_size": 65535, 00:05:55.470 "bdev_io_cache_size": 256, 00:05:55.470 "bdev_auto_examine": true, 00:05:55.470 "iobuf_small_cache_size": 128, 00:05:55.470 "iobuf_large_cache_size": 16 00:05:55.470 } 00:05:55.470 }, 00:05:55.470 { 00:05:55.470 "method": "bdev_raid_set_options", 00:05:55.470 "params": { 00:05:55.470 "process_window_size_kb": 1024, 00:05:55.471 "process_max_bandwidth_mb_sec": 0 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "bdev_iscsi_set_options", 00:05:55.471 "params": { 00:05:55.471 "timeout_sec": 30 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "bdev_nvme_set_options", 00:05:55.471 "params": { 00:05:55.471 "action_on_timeout": "none", 00:05:55.471 "timeout_us": 0, 00:05:55.471 "timeout_admin_us": 0, 00:05:55.471 "keep_alive_timeout_ms": 10000, 00:05:55.471 "arbitration_burst": 0, 00:05:55.471 "low_priority_weight": 0, 00:05:55.471 "medium_priority_weight": 0, 00:05:55.471 "high_priority_weight": 0, 00:05:55.471 "nvme_adminq_poll_period_us": 10000, 00:05:55.471 "nvme_ioq_poll_period_us": 0, 00:05:55.471 "io_queue_requests": 0, 00:05:55.471 "delay_cmd_submit": true, 00:05:55.471 "transport_retry_count": 4, 00:05:55.471 "bdev_retry_count": 3, 00:05:55.471 "transport_ack_timeout": 0, 00:05:55.471 "ctrlr_loss_timeout_sec": 0, 00:05:55.471 "reconnect_delay_sec": 0, 00:05:55.471 "fast_io_fail_timeout_sec": 0, 00:05:55.471 "disable_auto_failback": false, 00:05:55.471 "generate_uuids": false, 00:05:55.471 "transport_tos": 0, 00:05:55.471 "nvme_error_stat": false, 00:05:55.471 "rdma_srq_size": 0, 00:05:55.471 "io_path_stat": false, 00:05:55.471 
"allow_accel_sequence": false, 00:05:55.471 "rdma_max_cq_size": 0, 00:05:55.471 "rdma_cm_event_timeout_ms": 0, 00:05:55.471 "dhchap_digests": [ 00:05:55.471 "sha256", 00:05:55.471 "sha384", 00:05:55.471 "sha512" 00:05:55.471 ], 00:05:55.471 "dhchap_dhgroups": [ 00:05:55.471 "null", 00:05:55.471 "ffdhe2048", 00:05:55.471 "ffdhe3072", 00:05:55.471 "ffdhe4096", 00:05:55.471 "ffdhe6144", 00:05:55.471 "ffdhe8192" 00:05:55.471 ] 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "bdev_nvme_set_hotplug", 00:05:55.471 "params": { 00:05:55.471 "period_us": 100000, 00:05:55.471 "enable": false 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "bdev_wait_for_examine" 00:05:55.471 } 00:05:55.471 ] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "scsi", 00:05:55.471 "config": null 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "scheduler", 00:05:55.471 "config": [ 00:05:55.471 { 00:05:55.471 "method": "framework_set_scheduler", 00:05:55.471 "params": { 00:05:55.471 "name": "static" 00:05:55.471 } 00:05:55.471 } 00:05:55.471 ] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "vhost_scsi", 00:05:55.471 "config": [] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "vhost_blk", 00:05:55.471 "config": [] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "ublk", 00:05:55.471 "config": [] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "nbd", 00:05:55.471 "config": [] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "nvmf", 00:05:55.471 "config": [ 00:05:55.471 { 00:05:55.471 "method": "nvmf_set_config", 00:05:55.471 "params": { 00:05:55.471 "discovery_filter": "match_any", 00:05:55.471 "admin_cmd_passthru": { 00:05:55.471 "identify_ctrlr": false 00:05:55.471 }, 00:05:55.471 "dhchap_digests": [ 00:05:55.471 "sha256", 00:05:55.471 "sha384", 00:05:55.471 "sha512" 00:05:55.471 ], 00:05:55.471 "dhchap_dhgroups": [ 00:05:55.471 "null", 00:05:55.471 "ffdhe2048", 00:05:55.471 "ffdhe3072", 00:05:55.471 "ffdhe4096", 00:05:55.471 "ffdhe6144", 00:05:55.471 "ffdhe8192" 00:05:55.471 ] 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "nvmf_set_max_subsystems", 00:05:55.471 "params": { 00:05:55.471 "max_subsystems": 1024 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "nvmf_set_crdt", 00:05:55.471 "params": { 00:05:55.471 "crdt1": 0, 00:05:55.471 "crdt2": 0, 00:05:55.471 "crdt3": 0 00:05:55.471 } 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "method": "nvmf_create_transport", 00:05:55.471 "params": { 00:05:55.471 "trtype": "TCP", 00:05:55.471 "max_queue_depth": 128, 00:05:55.471 "max_io_qpairs_per_ctrlr": 127, 00:05:55.471 "in_capsule_data_size": 4096, 00:05:55.471 "max_io_size": 131072, 00:05:55.471 "io_unit_size": 131072, 00:05:55.471 "max_aq_depth": 128, 00:05:55.471 "num_shared_buffers": 511, 00:05:55.471 "buf_cache_size": 4294967295, 00:05:55.471 "dif_insert_or_strip": false, 00:05:55.471 "zcopy": false, 00:05:55.471 "c2h_success": true, 00:05:55.471 "sock_priority": 0, 00:05:55.471 "abort_timeout_sec": 1, 00:05:55.471 "ack_timeout": 0, 00:05:55.471 "data_wr_pool_size": 0 00:05:55.471 } 00:05:55.471 } 00:05:55.471 ] 00:05:55.471 }, 00:05:55.471 { 00:05:55.471 "subsystem": "iscsi", 00:05:55.471 "config": [ 00:05:55.471 { 00:05:55.471 "method": "iscsi_set_options", 00:05:55.471 "params": { 00:05:55.471 "node_base": "iqn.2016-06.io.spdk", 00:05:55.471 "max_sessions": 128, 00:05:55.471 "max_connections_per_session": 2, 00:05:55.471 "max_queue_depth": 64, 00:05:55.471 "default_time2wait": 2, 
00:05:55.471 "default_time2retain": 20, 00:05:55.471 "first_burst_length": 8192, 00:05:55.471 "immediate_data": true, 00:05:55.471 "allow_duplicated_isid": false, 00:05:55.471 "error_recovery_level": 0, 00:05:55.471 "nop_timeout": 60, 00:05:55.471 "nop_in_interval": 30, 00:05:55.471 "disable_chap": false, 00:05:55.471 "require_chap": false, 00:05:55.471 "mutual_chap": false, 00:05:55.471 "chap_group": 0, 00:05:55.471 "max_large_datain_per_connection": 64, 00:05:55.471 "max_r2t_per_connection": 4, 00:05:55.471 "pdu_pool_size": 36864, 00:05:55.471 "immediate_data_pool_size": 16384, 00:05:55.471 "data_out_pool_size": 2048 00:05:55.471 } 00:05:55.471 } 00:05:55.471 ] 00:05:55.471 } 00:05:55.471 ] 00:05:55.471 } 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70088 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70088 ']' 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70088 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70088 00:05:55.471 killing process with pid 70088 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70088' 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70088 00:05:55.471 14:13:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70088 00:05:55.730 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70117 00:05:55.730 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.730 14:13:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70117 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70117 ']' 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70117 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70117 00:06:01.074 killing process with pid 70117 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70117' 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70117 
00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70117 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.074 00:06:01.074 real 0m6.724s 00:06:01.074 user 0m6.426s 00:06:01.074 sys 0m0.613s 00:06:01.074 ************************************ 00:06:01.074 END TEST skip_rpc_with_json 00:06:01.074 ************************************ 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.074 14:13:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.074 14:13:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.074 14:13:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.074 14:13:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.074 ************************************ 00:06:01.074 START TEST skip_rpc_with_delay 00:06:01.074 ************************************ 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.074 [2024-11-29 14:13:42.798006] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:01.074 [2024-11-29 14:13:42.798107] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.074 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.074 00:06:01.074 real 0m0.109s 00:06:01.074 user 0m0.059s 00:06:01.074 sys 0m0.049s 00:06:01.075 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.075 14:13:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.075 ************************************ 00:06:01.075 END TEST skip_rpc_with_delay 00:06:01.075 ************************************ 00:06:01.334 14:13:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.334 14:13:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.334 14:13:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.334 14:13:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.334 14:13:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.334 14:13:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.334 ************************************ 00:06:01.334 START TEST exit_on_failed_rpc_init 00:06:01.334 ************************************ 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70228 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70228 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70228 ']' 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.334 14:13:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.334 [2024-11-29 14:13:42.954897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:01.335 [2024-11-29 14:13:42.955020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:06:01.335 [2024-11-29 14:13:43.100412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.593 [2024-11-29 14:13:43.129744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.163 14:13:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.163 [2024-11-29 14:13:43.860972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:02.163 [2024-11-29 14:13:43.861087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70246 ] 00:06:02.425 [2024-11-29 14:13:44.007091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.425 [2024-11-29 14:13:44.038047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.425 [2024-11-29 14:13:44.038133] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:02.425 [2024-11-29 14:13:44.038154] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.425 [2024-11-29 14:13:44.038165] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70228 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70228 ']' 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70228 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70228 00:06:02.425 killing process with pid 70228 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70228' 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70228 00:06:02.425 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70228 00:06:02.687 00:06:02.687 real 0m1.496s 00:06:02.687 user 0m1.668s 00:06:02.687 sys 0m0.355s 00:06:02.687 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.687 ************************************ 00:06:02.687 END TEST exit_on_failed_rpc_init 00:06:02.687 ************************************ 00:06:02.687 14:13:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.687 14:13:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.687 00:06:02.687 real 0m13.886s 00:06:02.687 user 0m13.215s 00:06:02.687 sys 0m1.399s 00:06:02.687 14:13:44 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.687 14:13:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.687 ************************************ 00:06:02.687 END TEST skip_rpc 00:06:02.687 ************************************ 00:06:02.687 14:13:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.687 14:13:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.687 14:13:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.687 14:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:02.687 
************************************ 00:06:02.687 START TEST rpc_client 00:06:02.687 ************************************ 00:06:02.687 14:13:44 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.948 * Looking for test storage... 00:06:02.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.948 14:13:44 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.948 14:13:44 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.948 14:13:44 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.948 14:13:44 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.948 14:13:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.949 14:13:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.949 --rc genhtml_branch_coverage=1 00:06:02.949 --rc genhtml_function_coverage=1 00:06:02.949 --rc genhtml_legend=1 00:06:02.949 --rc geninfo_all_blocks=1 00:06:02.949 --rc geninfo_unexecuted_blocks=1 00:06:02.949 00:06:02.949 ' 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.949 --rc genhtml_branch_coverage=1 00:06:02.949 --rc genhtml_function_coverage=1 00:06:02.949 --rc genhtml_legend=1 00:06:02.949 --rc geninfo_all_blocks=1 00:06:02.949 --rc geninfo_unexecuted_blocks=1 00:06:02.949 00:06:02.949 ' 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.949 --rc genhtml_branch_coverage=1 00:06:02.949 --rc genhtml_function_coverage=1 00:06:02.949 --rc genhtml_legend=1 00:06:02.949 --rc geninfo_all_blocks=1 00:06:02.949 --rc geninfo_unexecuted_blocks=1 00:06:02.949 00:06:02.949 ' 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.949 --rc genhtml_branch_coverage=1 00:06:02.949 --rc genhtml_function_coverage=1 00:06:02.949 --rc genhtml_legend=1 00:06:02.949 --rc geninfo_all_blocks=1 00:06:02.949 --rc geninfo_unexecuted_blocks=1 00:06:02.949 00:06:02.949 ' 00:06:02.949 14:13:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.949 OK 00:06:02.949 14:13:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.949 00:06:02.949 real 0m0.188s 00:06:02.949 user 0m0.111s 00:06:02.949 sys 0m0.081s 00:06:02.949 ************************************ 00:06:02.949 END TEST rpc_client 00:06:02.949 ************************************ 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.949 14:13:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.949 14:13:44 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.949 14:13:44 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.949 14:13:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.949 14:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:02.949 ************************************ 00:06:02.949 START TEST json_config 00:06:02.949 ************************************ 00:06:02.949 14:13:44 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.949 14:13:44 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.949 14:13:44 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.949 14:13:44 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.212 14:13:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.212 14:13:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.212 14:13:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.212 14:13:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.212 14:13:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.212 14:13:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.212 14:13:44 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.212 14:13:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.212 14:13:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.212 14:13:44 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.212 14:13:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.212 14:13:44 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.212 14:13:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.212 14:13:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.212 14:13:44 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.212 --rc genhtml_branch_coverage=1 00:06:03.212 --rc genhtml_function_coverage=1 00:06:03.212 --rc genhtml_legend=1 00:06:03.212 --rc geninfo_all_blocks=1 00:06:03.212 --rc geninfo_unexecuted_blocks=1 00:06:03.212 00:06:03.212 ' 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.212 --rc genhtml_branch_coverage=1 00:06:03.212 --rc genhtml_function_coverage=1 00:06:03.212 --rc genhtml_legend=1 00:06:03.212 --rc geninfo_all_blocks=1 00:06:03.212 --rc geninfo_unexecuted_blocks=1 00:06:03.212 00:06:03.212 ' 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.212 --rc genhtml_branch_coverage=1 00:06:03.212 --rc genhtml_function_coverage=1 00:06:03.212 --rc genhtml_legend=1 00:06:03.212 --rc geninfo_all_blocks=1 00:06:03.212 --rc geninfo_unexecuted_blocks=1 00:06:03.212 00:06:03.212 ' 00:06:03.212 14:13:44 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.212 --rc genhtml_branch_coverage=1 00:06:03.212 --rc genhtml_function_coverage=1 00:06:03.212 --rc genhtml_legend=1 00:06:03.212 --rc geninfo_all_blocks=1 00:06:03.212 --rc geninfo_unexecuted_blocks=1 00:06:03.212 00:06:03.212 ' 00:06:03.212 14:13:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.212 14:13:44 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b97862e-3ac3-467d-953d-42cb848625fb 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2b97862e-3ac3-467d-953d-42cb848625fb 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.212 14:13:44 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.212 14:13:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.212 14:13:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.212 14:13:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.212 14:13:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.212 14:13:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.213 14:13:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.213 14:13:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.213 14:13:44 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.213 14:13:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.213 14:13:44 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.213 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.213 14:13:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:03.213 WARNING: No tests are enabled so not running JSON configuration tests 00:06:03.213 14:13:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:03.213 00:06:03.213 real 0m0.138s 00:06:03.213 user 0m0.089s 00:06:03.213 sys 0m0.052s 00:06:03.213 14:13:44 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.213 14:13:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.213 ************************************ 00:06:03.213 END TEST json_config 00:06:03.213 ************************************ 00:06:03.213 14:13:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.213 14:13:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.213 14:13:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.213 14:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:03.213 ************************************ 00:06:03.213 START TEST json_config_extra_key 00:06:03.213 ************************************ 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.213 14:13:44 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.213 14:13:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.213 --rc genhtml_branch_coverage=1 00:06:03.213 --rc genhtml_function_coverage=1 00:06:03.213 --rc genhtml_legend=1 00:06:03.213 --rc geninfo_all_blocks=1 00:06:03.213 --rc geninfo_unexecuted_blocks=1 00:06:03.213 00:06:03.213 ' 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.213 --rc genhtml_branch_coverage=1 00:06:03.213 --rc genhtml_function_coverage=1 00:06:03.213 --rc genhtml_legend=1 00:06:03.213 --rc geninfo_all_blocks=1 00:06:03.213 --rc geninfo_unexecuted_blocks=1 00:06:03.213 00:06:03.213 ' 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.213 --rc genhtml_branch_coverage=1 00:06:03.213 --rc genhtml_function_coverage=1 00:06:03.213 --rc genhtml_legend=1 00:06:03.213 --rc geninfo_all_blocks=1 00:06:03.213 --rc geninfo_unexecuted_blocks=1 00:06:03.213 00:06:03.213 ' 00:06:03.213 14:13:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.213 --rc genhtml_branch_coverage=1 00:06:03.213 --rc 
genhtml_function_coverage=1 00:06:03.213 --rc genhtml_legend=1 00:06:03.213 --rc geninfo_all_blocks=1 00:06:03.213 --rc geninfo_unexecuted_blocks=1 00:06:03.213 00:06:03.213 ' 00:06:03.213 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.213 14:13:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:03.213 14:13:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.213 14:13:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b97862e-3ac3-467d-953d-42cb848625fb 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2b97862e-3ac3-467d-953d-42cb848625fb 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.214 14:13:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.214 14:13:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.214 14:13:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.214 14:13:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.214 14:13:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.214 14:13:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.214 14:13:44 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.214 14:13:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:03.214 14:13:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.214 14:13:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.214 INFO: launching applications... 00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
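The json_config_extra_key trace above keeps all per-app state in associative arrays keyed by 'target' (app_pid, app_socket, app_params, configs_path) before echoing 'INFO: launching applications...'. A minimal sketch of how a launch helper could use those arrays follows; the helper name start_target_app and its body are assumptions for illustration, while the array contents and the spdk_tgt arguments are taken from the surrounding entries.

    # Per-app bookkeeping, as seen in the trace (key 'target' only).
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Hypothetical launch helper: start spdk_tgt for one app key and remember its PID.
    start_target_app() {
        local app=$1
        # app_params is left unquoted on purpose so "-m 0x1 -s 1024" splits into arguments.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }

    start_target_app target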
00:06:03.214 14:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70423 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.214 Waiting for target to run... 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70423 /var/tmp/spdk_tgt.sock 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70423 ']' 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.214 14:13:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.214 14:13:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.475 [2024-11-29 14:13:45.069412] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:03.475 [2024-11-29 14:13:45.069546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ] 00:06:03.737 [2024-11-29 14:13:45.371301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.737 [2024-11-29 14:13:45.389185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.309 00:06:04.309 INFO: shutting down applications... 00:06:04.309 14:13:45 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.309 14:13:45 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:04.309 14:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
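After 'INFO: shutting down applications...' the json_config/common.sh trace that follows (lines 38 through 45 of that script) sends SIGINT and then polls the PID with kill -0 in half-second steps, up to 30 times. Condensed into one function, assuming the same 30-iteration budget as in the trace:

    # Ask the target to exit via SIGINT and wait up to ~15 seconds for the PID to vanish.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after the budget; the caller escalates
    }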
00:06:04.309 14:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70423 ]] 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70423 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70423 00:06:04.309 14:13:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70423 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.880 14:13:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.880 SPDK target shutdown done 00:06:04.880 Success 00:06:04.880 14:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.880 ************************************ 00:06:04.880 END TEST json_config_extra_key 00:06:04.880 ************************************ 00:06:04.880 00:06:04.880 real 0m1.578s 00:06:04.880 user 0m1.288s 00:06:04.880 sys 0m0.349s 00:06:04.880 14:13:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.880 14:13:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.880 14:13:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.880 14:13:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.880 14:13:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.880 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:04.880 ************************************ 00:06:04.880 START TEST alias_rpc 00:06:04.880 ************************************ 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.880 * Looking for test storage... 
00:06:04.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
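'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' is printed while waitforlisten polls the freshly started spdk_tgt. The real helper lives in autotest_common.sh and is not shown in this log, so the loop below is purely an assumption about its shape, not the actual implementation:

    # Assumed polling loop: succeed once the PID is alive and its RPC socket file exists.
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            [ -S "$sock" ] && return 0                # socket file is there; treat the target as listening
            sleep 0.1
        done
        return 1
    }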
00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.880 14:13:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.880 --rc genhtml_branch_coverage=1 00:06:04.880 --rc genhtml_function_coverage=1 00:06:04.880 --rc genhtml_legend=1 00:06:04.880 --rc geninfo_all_blocks=1 00:06:04.880 --rc geninfo_unexecuted_blocks=1 00:06:04.880 00:06:04.880 ' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.880 --rc genhtml_branch_coverage=1 00:06:04.880 --rc genhtml_function_coverage=1 00:06:04.880 --rc genhtml_legend=1 00:06:04.880 --rc geninfo_all_blocks=1 00:06:04.880 --rc geninfo_unexecuted_blocks=1 00:06:04.880 00:06:04.880 ' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.880 --rc genhtml_branch_coverage=1 00:06:04.880 --rc genhtml_function_coverage=1 00:06:04.880 --rc genhtml_legend=1 00:06:04.880 --rc geninfo_all_blocks=1 00:06:04.880 --rc geninfo_unexecuted_blocks=1 00:06:04.880 00:06:04.880 ' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.880 --rc genhtml_branch_coverage=1 00:06:04.880 --rc genhtml_function_coverage=1 00:06:04.880 --rc genhtml_legend=1 00:06:04.880 --rc geninfo_all_blocks=1 00:06:04.880 --rc geninfo_unexecuted_blocks=1 00:06:04.880 00:06:04.880 ' 00:06:04.880 14:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.880 14:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70497 00:06:04.880 14:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70497 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70497 ']' 00:06:04.880 14:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.880 14:13:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.141 [2024-11-29 14:13:46.695715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
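The lt 1.15 2 check traced several times above (scripts/common.sh@333-368) splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A standalone sketch of that comparison, with the helper name version_lt chosen here only for illustration:

    # Return 0 when $1 is a strictly lower version than $2, 1 otherwise.
    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # the real common.sh also validates each field with its decimal helper
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x; add the 1.x --rc options'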
00:06:05.141 [2024-11-29 14:13:46.695850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70497 ] 00:06:05.141 [2024-11-29 14:13:46.838741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.141 [2024-11-29 14:13:46.891413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:06.097 14:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.097 14:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70497 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70497 ']' 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70497 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70497 00:06:06.097 killing process with pid 70497 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70497' 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 70497 00:06:06.097 14:13:47 alias_rpc -- common/autotest_common.sh@974 -- # wait 70497 00:06:06.386 ************************************ 00:06:06.386 END TEST alias_rpc 00:06:06.386 ************************************ 00:06:06.386 00:06:06.386 real 0m1.656s 00:06:06.386 user 0m1.759s 00:06:06.386 sys 0m0.447s 00:06:06.386 14:13:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.386 14:13:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.647 14:13:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.647 14:13:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.647 14:13:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.647 14:13:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.647 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:06.647 ************************************ 00:06:06.647 START TEST spdkcli_tcp 00:06:06.647 ************************************ 00:06:06.647 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.647 * Looking for test storage... 
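killprocess 70497 above checks that the PID is set and alive, reads the command name with ps (reactor_0 here) so that a sudo wrapper is never signalled, then kills the process and waits for it. Boiled down to the steps visible in the trace, Linux branch only:

    # Stop a test-started SPDK process; refuse to signal a sudo wrapper by mistake.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0             # already gone
        process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
        [ "$process_name" = sudo ] && return 1             # never kill the sudo parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }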
00:06:06.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.647 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.647 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.647 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.647 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.647 14:13:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.648 14:13:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.648 --rc genhtml_branch_coverage=1 00:06:06.648 --rc genhtml_function_coverage=1 00:06:06.648 --rc genhtml_legend=1 00:06:06.648 --rc geninfo_all_blocks=1 00:06:06.648 --rc geninfo_unexecuted_blocks=1 00:06:06.648 00:06:06.648 ' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.648 --rc genhtml_branch_coverage=1 00:06:06.648 --rc genhtml_function_coverage=1 00:06:06.648 --rc genhtml_legend=1 00:06:06.648 --rc geninfo_all_blocks=1 00:06:06.648 --rc geninfo_unexecuted_blocks=1 00:06:06.648 
00:06:06.648 ' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.648 --rc genhtml_branch_coverage=1 00:06:06.648 --rc genhtml_function_coverage=1 00:06:06.648 --rc genhtml_legend=1 00:06:06.648 --rc geninfo_all_blocks=1 00:06:06.648 --rc geninfo_unexecuted_blocks=1 00:06:06.648 00:06:06.648 ' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.648 --rc genhtml_branch_coverage=1 00:06:06.648 --rc genhtml_function_coverage=1 00:06:06.648 --rc genhtml_legend=1 00:06:06.648 --rc geninfo_all_blocks=1 00:06:06.648 --rc geninfo_unexecuted_blocks=1 00:06:06.648 00:06:06.648 ' 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70582 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70582 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70582 ']' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.648 14:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.648 14:13:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.648 [2024-11-29 14:13:48.431009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
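In the entries that follow, the spdkcli_tcp test bridges the TCP port set up just above (127.0.0.1:9998) to the target's UNIX-domain RPC socket with socat and lists the available RPCs over TCP. Reduced to the essential commands, matching the paths, port and retry settings that appear in the trace below; the final kill stands in for the test's own cleanup:

    # Expose /var/tmp/spdk.sock on 127.0.0.1:9998 for the duration of the check.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the target over TCP; -r and -t mirror the retry count and timeout in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"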
00:06:06.648 [2024-11-29 14:13:48.431165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70582 ] 00:06:06.908 [2024-11-29 14:13:48.583673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.908 [2024-11-29 14:13:48.633088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.908 [2024-11-29 14:13:48.633146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.850 14:13:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.850 14:13:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:07.850 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70593 00:06:07.850 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.850 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.850 [ 00:06:07.850 "bdev_malloc_delete", 00:06:07.850 "bdev_malloc_create", 00:06:07.850 "bdev_null_resize", 00:06:07.850 "bdev_null_delete", 00:06:07.850 "bdev_null_create", 00:06:07.850 "bdev_nvme_cuse_unregister", 00:06:07.850 "bdev_nvme_cuse_register", 00:06:07.850 "bdev_opal_new_user", 00:06:07.850 "bdev_opal_set_lock_state", 00:06:07.850 "bdev_opal_delete", 00:06:07.850 "bdev_opal_get_info", 00:06:07.850 "bdev_opal_create", 00:06:07.850 "bdev_nvme_opal_revert", 00:06:07.850 "bdev_nvme_opal_init", 00:06:07.850 "bdev_nvme_send_cmd", 00:06:07.850 "bdev_nvme_set_keys", 00:06:07.850 "bdev_nvme_get_path_iostat", 00:06:07.850 "bdev_nvme_get_mdns_discovery_info", 00:06:07.850 "bdev_nvme_stop_mdns_discovery", 00:06:07.850 "bdev_nvme_start_mdns_discovery", 00:06:07.850 "bdev_nvme_set_multipath_policy", 00:06:07.850 "bdev_nvme_set_preferred_path", 00:06:07.851 "bdev_nvme_get_io_paths", 00:06:07.851 "bdev_nvme_remove_error_injection", 00:06:07.851 "bdev_nvme_add_error_injection", 00:06:07.851 "bdev_nvme_get_discovery_info", 00:06:07.851 "bdev_nvme_stop_discovery", 00:06:07.851 "bdev_nvme_start_discovery", 00:06:07.851 "bdev_nvme_get_controller_health_info", 00:06:07.851 "bdev_nvme_disable_controller", 00:06:07.851 "bdev_nvme_enable_controller", 00:06:07.851 "bdev_nvme_reset_controller", 00:06:07.851 "bdev_nvme_get_transport_statistics", 00:06:07.851 "bdev_nvme_apply_firmware", 00:06:07.851 "bdev_nvme_detach_controller", 00:06:07.851 "bdev_nvme_get_controllers", 00:06:07.851 "bdev_nvme_attach_controller", 00:06:07.851 "bdev_nvme_set_hotplug", 00:06:07.851 "bdev_nvme_set_options", 00:06:07.851 "bdev_passthru_delete", 00:06:07.851 "bdev_passthru_create", 00:06:07.851 "bdev_lvol_set_parent_bdev", 00:06:07.851 "bdev_lvol_set_parent", 00:06:07.851 "bdev_lvol_check_shallow_copy", 00:06:07.851 "bdev_lvol_start_shallow_copy", 00:06:07.851 "bdev_lvol_grow_lvstore", 00:06:07.851 "bdev_lvol_get_lvols", 00:06:07.851 "bdev_lvol_get_lvstores", 00:06:07.851 "bdev_lvol_delete", 00:06:07.851 "bdev_lvol_set_read_only", 00:06:07.851 "bdev_lvol_resize", 00:06:07.851 "bdev_lvol_decouple_parent", 00:06:07.851 "bdev_lvol_inflate", 00:06:07.851 "bdev_lvol_rename", 00:06:07.851 "bdev_lvol_clone_bdev", 00:06:07.851 "bdev_lvol_clone", 00:06:07.851 "bdev_lvol_snapshot", 00:06:07.851 "bdev_lvol_create", 00:06:07.851 "bdev_lvol_delete_lvstore", 00:06:07.851 "bdev_lvol_rename_lvstore", 00:06:07.851 
"bdev_lvol_create_lvstore", 00:06:07.851 "bdev_raid_set_options", 00:06:07.851 "bdev_raid_remove_base_bdev", 00:06:07.851 "bdev_raid_add_base_bdev", 00:06:07.851 "bdev_raid_delete", 00:06:07.851 "bdev_raid_create", 00:06:07.851 "bdev_raid_get_bdevs", 00:06:07.851 "bdev_error_inject_error", 00:06:07.851 "bdev_error_delete", 00:06:07.851 "bdev_error_create", 00:06:07.851 "bdev_split_delete", 00:06:07.851 "bdev_split_create", 00:06:07.851 "bdev_delay_delete", 00:06:07.851 "bdev_delay_create", 00:06:07.851 "bdev_delay_update_latency", 00:06:07.851 "bdev_zone_block_delete", 00:06:07.851 "bdev_zone_block_create", 00:06:07.851 "blobfs_create", 00:06:07.851 "blobfs_detect", 00:06:07.851 "blobfs_set_cache_size", 00:06:07.851 "bdev_xnvme_delete", 00:06:07.851 "bdev_xnvme_create", 00:06:07.851 "bdev_aio_delete", 00:06:07.851 "bdev_aio_rescan", 00:06:07.851 "bdev_aio_create", 00:06:07.851 "bdev_ftl_set_property", 00:06:07.851 "bdev_ftl_get_properties", 00:06:07.851 "bdev_ftl_get_stats", 00:06:07.851 "bdev_ftl_unmap", 00:06:07.851 "bdev_ftl_unload", 00:06:07.851 "bdev_ftl_delete", 00:06:07.851 "bdev_ftl_load", 00:06:07.851 "bdev_ftl_create", 00:06:07.851 "bdev_virtio_attach_controller", 00:06:07.851 "bdev_virtio_scsi_get_devices", 00:06:07.851 "bdev_virtio_detach_controller", 00:06:07.851 "bdev_virtio_blk_set_hotplug", 00:06:07.851 "bdev_iscsi_delete", 00:06:07.851 "bdev_iscsi_create", 00:06:07.851 "bdev_iscsi_set_options", 00:06:07.851 "accel_error_inject_error", 00:06:07.851 "ioat_scan_accel_module", 00:06:07.851 "dsa_scan_accel_module", 00:06:07.851 "iaa_scan_accel_module", 00:06:07.851 "keyring_file_remove_key", 00:06:07.851 "keyring_file_add_key", 00:06:07.851 "keyring_linux_set_options", 00:06:07.851 "fsdev_aio_delete", 00:06:07.851 "fsdev_aio_create", 00:06:07.851 "iscsi_get_histogram", 00:06:07.851 "iscsi_enable_histogram", 00:06:07.851 "iscsi_set_options", 00:06:07.851 "iscsi_get_auth_groups", 00:06:07.851 "iscsi_auth_group_remove_secret", 00:06:07.851 "iscsi_auth_group_add_secret", 00:06:07.851 "iscsi_delete_auth_group", 00:06:07.851 "iscsi_create_auth_group", 00:06:07.851 "iscsi_set_discovery_auth", 00:06:07.851 "iscsi_get_options", 00:06:07.851 "iscsi_target_node_request_logout", 00:06:07.851 "iscsi_target_node_set_redirect", 00:06:07.851 "iscsi_target_node_set_auth", 00:06:07.851 "iscsi_target_node_add_lun", 00:06:07.851 "iscsi_get_stats", 00:06:07.851 "iscsi_get_connections", 00:06:07.851 "iscsi_portal_group_set_auth", 00:06:07.851 "iscsi_start_portal_group", 00:06:07.851 "iscsi_delete_portal_group", 00:06:07.851 "iscsi_create_portal_group", 00:06:07.851 "iscsi_get_portal_groups", 00:06:07.851 "iscsi_delete_target_node", 00:06:07.851 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.851 "iscsi_target_node_add_pg_ig_maps", 00:06:07.851 "iscsi_create_target_node", 00:06:07.851 "iscsi_get_target_nodes", 00:06:07.851 "iscsi_delete_initiator_group", 00:06:07.851 "iscsi_initiator_group_remove_initiators", 00:06:07.851 "iscsi_initiator_group_add_initiators", 00:06:07.851 "iscsi_create_initiator_group", 00:06:07.851 "iscsi_get_initiator_groups", 00:06:07.851 "nvmf_set_crdt", 00:06:07.851 "nvmf_set_config", 00:06:07.851 "nvmf_set_max_subsystems", 00:06:07.851 "nvmf_stop_mdns_prr", 00:06:07.851 "nvmf_publish_mdns_prr", 00:06:07.851 "nvmf_subsystem_get_listeners", 00:06:07.851 "nvmf_subsystem_get_qpairs", 00:06:07.851 "nvmf_subsystem_get_controllers", 00:06:07.851 "nvmf_get_stats", 00:06:07.851 "nvmf_get_transports", 00:06:07.851 "nvmf_create_transport", 00:06:07.851 "nvmf_get_targets", 00:06:07.851 
"nvmf_delete_target", 00:06:07.851 "nvmf_create_target", 00:06:07.851 "nvmf_subsystem_allow_any_host", 00:06:07.851 "nvmf_subsystem_set_keys", 00:06:07.851 "nvmf_subsystem_remove_host", 00:06:07.851 "nvmf_subsystem_add_host", 00:06:07.851 "nvmf_ns_remove_host", 00:06:07.851 "nvmf_ns_add_host", 00:06:07.851 "nvmf_subsystem_remove_ns", 00:06:07.851 "nvmf_subsystem_set_ns_ana_group", 00:06:07.851 "nvmf_subsystem_add_ns", 00:06:07.851 "nvmf_subsystem_listener_set_ana_state", 00:06:07.851 "nvmf_discovery_get_referrals", 00:06:07.851 "nvmf_discovery_remove_referral", 00:06:07.851 "nvmf_discovery_add_referral", 00:06:07.851 "nvmf_subsystem_remove_listener", 00:06:07.851 "nvmf_subsystem_add_listener", 00:06:07.851 "nvmf_delete_subsystem", 00:06:07.851 "nvmf_create_subsystem", 00:06:07.851 "nvmf_get_subsystems", 00:06:07.851 "env_dpdk_get_mem_stats", 00:06:07.851 "nbd_get_disks", 00:06:07.851 "nbd_stop_disk", 00:06:07.851 "nbd_start_disk", 00:06:07.851 "ublk_recover_disk", 00:06:07.851 "ublk_get_disks", 00:06:07.851 "ublk_stop_disk", 00:06:07.851 "ublk_start_disk", 00:06:07.851 "ublk_destroy_target", 00:06:07.851 "ublk_create_target", 00:06:07.851 "virtio_blk_create_transport", 00:06:07.851 "virtio_blk_get_transports", 00:06:07.851 "vhost_controller_set_coalescing", 00:06:07.851 "vhost_get_controllers", 00:06:07.851 "vhost_delete_controller", 00:06:07.851 "vhost_create_blk_controller", 00:06:07.851 "vhost_scsi_controller_remove_target", 00:06:07.851 "vhost_scsi_controller_add_target", 00:06:07.851 "vhost_start_scsi_controller", 00:06:07.851 "vhost_create_scsi_controller", 00:06:07.851 "thread_set_cpumask", 00:06:07.851 "scheduler_set_options", 00:06:07.851 "framework_get_governor", 00:06:07.851 "framework_get_scheduler", 00:06:07.851 "framework_set_scheduler", 00:06:07.851 "framework_get_reactors", 00:06:07.851 "thread_get_io_channels", 00:06:07.851 "thread_get_pollers", 00:06:07.851 "thread_get_stats", 00:06:07.851 "framework_monitor_context_switch", 00:06:07.851 "spdk_kill_instance", 00:06:07.851 "log_enable_timestamps", 00:06:07.851 "log_get_flags", 00:06:07.851 "log_clear_flag", 00:06:07.851 "log_set_flag", 00:06:07.851 "log_get_level", 00:06:07.851 "log_set_level", 00:06:07.851 "log_get_print_level", 00:06:07.851 "log_set_print_level", 00:06:07.851 "framework_enable_cpumask_locks", 00:06:07.851 "framework_disable_cpumask_locks", 00:06:07.851 "framework_wait_init", 00:06:07.851 "framework_start_init", 00:06:07.851 "scsi_get_devices", 00:06:07.851 "bdev_get_histogram", 00:06:07.851 "bdev_enable_histogram", 00:06:07.851 "bdev_set_qos_limit", 00:06:07.851 "bdev_set_qd_sampling_period", 00:06:07.851 "bdev_get_bdevs", 00:06:07.851 "bdev_reset_iostat", 00:06:07.851 "bdev_get_iostat", 00:06:07.851 "bdev_examine", 00:06:07.851 "bdev_wait_for_examine", 00:06:07.851 "bdev_set_options", 00:06:07.851 "accel_get_stats", 00:06:07.851 "accel_set_options", 00:06:07.851 "accel_set_driver", 00:06:07.851 "accel_crypto_key_destroy", 00:06:07.851 "accel_crypto_keys_get", 00:06:07.851 "accel_crypto_key_create", 00:06:07.851 "accel_assign_opc", 00:06:07.851 "accel_get_module_info", 00:06:07.851 "accel_get_opc_assignments", 00:06:07.851 "vmd_rescan", 00:06:07.851 "vmd_remove_device", 00:06:07.851 "vmd_enable", 00:06:07.851 "sock_get_default_impl", 00:06:07.851 "sock_set_default_impl", 00:06:07.851 "sock_impl_set_options", 00:06:07.851 "sock_impl_get_options", 00:06:07.851 "iobuf_get_stats", 00:06:07.851 "iobuf_set_options", 00:06:07.851 "keyring_get_keys", 00:06:07.851 "framework_get_pci_devices", 00:06:07.851 
"framework_get_config", 00:06:07.851 "framework_get_subsystems", 00:06:07.851 "fsdev_set_opts", 00:06:07.851 "fsdev_get_opts", 00:06:07.851 "trace_get_info", 00:06:07.851 "trace_get_tpoint_group_mask", 00:06:07.851 "trace_disable_tpoint_group", 00:06:07.851 "trace_enable_tpoint_group", 00:06:07.851 "trace_clear_tpoint_mask", 00:06:07.851 "trace_set_tpoint_mask", 00:06:07.851 "notify_get_notifications", 00:06:07.851 "notify_get_types", 00:06:07.851 "spdk_get_version", 00:06:07.851 "rpc_get_methods" 00:06:07.851 ] 00:06:07.852 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.852 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.852 14:13:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70582 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70582 ']' 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70582 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70582 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.852 killing process with pid 70582 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70582' 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70582 00:06:07.852 14:13:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70582 00:06:08.112 ************************************ 00:06:08.112 END TEST spdkcli_tcp 00:06:08.112 ************************************ 00:06:08.112 00:06:08.112 real 0m1.583s 00:06:08.112 user 0m2.676s 00:06:08.112 sys 0m0.487s 00:06:08.112 14:13:49 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.112 14:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.112 14:13:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.112 14:13:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.112 14:13:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.112 14:13:49 -- common/autotest_common.sh@10 -- # set +x 00:06:08.112 ************************************ 00:06:08.112 START TEST dpdk_mem_utility 00:06:08.112 ************************************ 00:06:08.112 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.371 * Looking for test storage... 
00:06:08.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.371 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.371 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.371 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.372 14:13:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.372 --rc genhtml_branch_coverage=1 00:06:08.372 --rc genhtml_function_coverage=1 00:06:08.372 --rc genhtml_legend=1 00:06:08.372 --rc geninfo_all_blocks=1 00:06:08.372 --rc geninfo_unexecuted_blocks=1 00:06:08.372 00:06:08.372 ' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.372 --rc 
genhtml_branch_coverage=1 00:06:08.372 --rc genhtml_function_coverage=1 00:06:08.372 --rc genhtml_legend=1 00:06:08.372 --rc geninfo_all_blocks=1 00:06:08.372 --rc geninfo_unexecuted_blocks=1 00:06:08.372 00:06:08.372 ' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.372 --rc genhtml_branch_coverage=1 00:06:08.372 --rc genhtml_function_coverage=1 00:06:08.372 --rc genhtml_legend=1 00:06:08.372 --rc geninfo_all_blocks=1 00:06:08.372 --rc geninfo_unexecuted_blocks=1 00:06:08.372 00:06:08.372 ' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.372 --rc genhtml_branch_coverage=1 00:06:08.372 --rc genhtml_function_coverage=1 00:06:08.372 --rc genhtml_legend=1 00:06:08.372 --rc geninfo_all_blocks=1 00:06:08.372 --rc geninfo_unexecuted_blocks=1 00:06:08.372 00:06:08.372 ' 00:06:08.372 14:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.372 14:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70676 00:06:08.372 14:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70676 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70676 ']' 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.372 14:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.372 14:13:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.372 [2024-11-29 14:13:50.111271] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:08.372 [2024-11-29 14:13:50.111893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70676 ] 00:06:08.632 [2024-11-29 14:13:50.283189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.632 [2024-11-29 14:13:50.328210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.201 14:13:50 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.201 14:13:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:09.201 14:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.201 14:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.201 14:13:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.201 14:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.201 { 00:06:09.201 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.201 } 00:06:09.201 14:13:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.201 14:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.201 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:09.201 1 heaps totaling size 860.000000 MiB 00:06:09.201 size: 860.000000 MiB heap id: 0 00:06:09.201 end heaps---------- 00:06:09.201 9 mempools totaling size 642.649841 MiB 00:06:09.201 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.201 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.201 size: 92.545471 MiB name: bdev_io_70676 00:06:09.201 size: 51.011292 MiB name: evtpool_70676 00:06:09.201 size: 50.003479 MiB name: msgpool_70676 00:06:09.201 size: 36.509338 MiB name: fsdev_io_70676 00:06:09.201 size: 21.763794 MiB name: PDU_Pool 00:06:09.201 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.201 size: 0.026123 MiB name: Session_Pool 00:06:09.201 end mempools------- 00:06:09.201 6 memzones totaling size 4.142822 MiB 00:06:09.201 size: 1.000366 MiB name: RG_ring_0_70676 00:06:09.201 size: 1.000366 MiB name: RG_ring_1_70676 00:06:09.201 size: 1.000366 MiB name: RG_ring_4_70676 00:06:09.201 size: 1.000366 MiB name: RG_ring_5_70676 00:06:09.201 size: 0.125366 MiB name: RG_ring_2_70676 00:06:09.201 size: 0.015991 MiB name: RG_ring_3_70676 00:06:09.201 end memzones------- 00:06:09.201 14:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.462 heap id: 0 total size: 860.000000 MiB number of busy elements: 318 number of free elements: 16 00:06:09.462 list of free elements. 
size: 13.934509 MiB 00:06:09.462 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.462 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:09.462 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:09.462 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:09.462 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:09.462 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:09.462 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:09.462 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:09.462 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:09.462 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:09.462 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:09.462 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:09.462 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:09.462 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:09.462 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:09.462 element at address: 0x200003a00000 with size: 0.352295 MiB 00:06:09.462 list of standard malloc elements. size: 199.268799 MiB 00:06:09.462 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:09.462 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:09.462 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:09.462 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:09.462 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:09.462 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.462 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:09.462 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.462 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:09.462 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:09.462 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a5a300 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a5e7c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:09.462 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:09.462 element at 
address: 0x200003e7d000 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b480 
with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:09.463 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892980 with size: 0.000183 MiB 
00:06:09.463 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:09.463 element at 
address: 0x20001d894f00 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:09.463 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e1c0 
with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:09.464 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:09.464 list of memzone associated elements. 
size: 646.796692 MiB 00:06:09.464 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:09.464 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.464 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:09.464 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.464 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:09.464 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70676_0 00:06:09.464 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:09.464 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70676_0 00:06:09.464 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:09.464 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70676_0 00:06:09.464 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:09.464 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70676_0 00:06:09.464 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:09.464 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.464 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:09.464 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.464 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:09.464 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70676 00:06:09.464 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:09.464 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70676 00:06:09.464 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:09.464 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70676 00:06:09.464 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:09.464 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.464 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:09.464 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.464 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:09.464 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.464 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:09.464 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.464 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:09.464 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70676 00:06:09.464 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:09.464 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70676 00:06:09.464 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:09.464 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70676 00:06:09.464 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:09.464 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70676 00:06:09.464 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:09.464 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70676 00:06:09.464 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:09.464 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70676 00:06:09.464 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:09.464 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.464 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:09.464 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.464 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:09.464 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.464 element at address: 0x200003a5e880 with size: 0.125488 MiB 00:06:09.464 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70676 00:06:09.464 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:09.464 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.464 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:09.464 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.464 element at address: 0x200003a5a5c0 with size: 0.016113 MiB 00:06:09.464 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70676 00:06:09.464 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:09.464 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.464 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:09.464 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70676 00:06:09.464 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:09.464 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70676 00:06:09.464 element at address: 0x200003a5a3c0 with size: 0.000305 MiB 00:06:09.464 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70676 00:06:09.464 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:09.464 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.464 14:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.464 14:13:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70676 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70676 ']' 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70676 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70676 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70676' 00:06:09.464 killing process with pid 70676 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70676 00:06:09.464 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70676 00:06:09.723 00:06:09.723 real 0m1.593s 00:06:09.723 user 0m1.538s 00:06:09.723 sys 0m0.479s 00:06:09.723 ************************************ 00:06:09.723 END TEST dpdk_mem_utility 00:06:09.723 ************************************ 00:06:09.723 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.723 14:13:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.723 14:13:51 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.723 14:13:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.723 14:13:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.723 14:13:51 -- common/autotest_common.sh@10 -- # set +x 
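The dpdk_mem_utility run above exercises the whole memory-reporting path: spdk_tgt is started, the env_dpdk_get_mem_stats RPC reports its dump file as /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary, with "-m 0" adding the per-element detail for heap id 0. A minimal sketch of the same flow outside the test harness, assuming the target's default RPC socket and using the standalone scripts/rpc.py client in place of the harness's rpc_cmd wrapper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &                      # start the target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # writes the dump to /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0             # per-element listing for heap id 0

The heap id passed with -m corresponds to the "heap id: 0" line in the summary; the free-element, malloc-element and memzone listings above are the raw contents of that single 860 MiB heap.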
00:06:09.983 ************************************ 00:06:09.983 START TEST event 00:06:09.983 ************************************ 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.983 * Looking for test storage... 00:06:09.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.983 14:13:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.983 14:13:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.983 14:13:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.983 14:13:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.983 14:13:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.983 14:13:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.983 14:13:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.983 14:13:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.983 14:13:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.983 14:13:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.983 14:13:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.983 14:13:51 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.983 14:13:51 event -- scripts/common.sh@345 -- # : 1 00:06:09.983 14:13:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.983 14:13:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.983 14:13:51 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.983 14:13:51 event -- scripts/common.sh@353 -- # local d=1 00:06:09.983 14:13:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.983 14:13:51 event -- scripts/common.sh@355 -- # echo 1 00:06:09.983 14:13:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.983 14:13:51 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.983 14:13:51 event -- scripts/common.sh@353 -- # local d=2 00:06:09.983 14:13:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.983 14:13:51 event -- scripts/common.sh@355 -- # echo 2 00:06:09.983 14:13:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.983 14:13:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.983 14:13:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.983 14:13:51 event -- scripts/common.sh@368 -- # return 0 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.983 --rc genhtml_branch_coverage=1 00:06:09.983 --rc genhtml_function_coverage=1 00:06:09.983 --rc genhtml_legend=1 00:06:09.983 --rc geninfo_all_blocks=1 00:06:09.983 --rc geninfo_unexecuted_blocks=1 00:06:09.983 00:06:09.983 ' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.983 --rc genhtml_branch_coverage=1 00:06:09.983 --rc genhtml_function_coverage=1 00:06:09.983 --rc genhtml_legend=1 00:06:09.983 --rc 
geninfo_all_blocks=1 00:06:09.983 --rc geninfo_unexecuted_blocks=1 00:06:09.983 00:06:09.983 ' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.983 --rc genhtml_branch_coverage=1 00:06:09.983 --rc genhtml_function_coverage=1 00:06:09.983 --rc genhtml_legend=1 00:06:09.983 --rc geninfo_all_blocks=1 00:06:09.983 --rc geninfo_unexecuted_blocks=1 00:06:09.983 00:06:09.983 ' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.983 --rc genhtml_branch_coverage=1 00:06:09.983 --rc genhtml_function_coverage=1 00:06:09.983 --rc genhtml_legend=1 00:06:09.983 --rc geninfo_all_blocks=1 00:06:09.983 --rc geninfo_unexecuted_blocks=1 00:06:09.983 00:06:09.983 ' 00:06:09.983 14:13:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.983 14:13:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.983 14:13:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:09.983 14:13:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.983 14:13:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.983 ************************************ 00:06:09.983 START TEST event_perf 00:06:09.983 ************************************ 00:06:09.983 14:13:51 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.983 Running I/O for 1 seconds...[2024-11-29 14:13:51.686466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:09.983 [2024-11-29 14:13:51.686684] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70757 ] 00:06:10.242 [2024-11-29 14:13:51.834982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.242 [2024-11-29 14:13:51.878042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.242 [2024-11-29 14:13:51.878255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.242 Running I/O for 1 seconds...[2024-11-29 14:13:51.878578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.242 [2024-11-29 14:13:51.878646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.178 00:06:11.178 lcore 0: 137629 00:06:11.178 lcore 1: 137631 00:06:11.178 lcore 2: 137631 00:06:11.178 lcore 3: 137629 00:06:11.178 done. 
00:06:11.178 00:06:11.178 real 0m1.306s 00:06:11.178 user 0m4.095s 00:06:11.178 sys 0m0.083s 00:06:11.178 14:13:52 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.178 14:13:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.178 ************************************ 00:06:11.178 END TEST event_perf 00:06:11.178 ************************************ 00:06:11.437 14:13:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.437 14:13:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:11.437 14:13:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.437 14:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.437 ************************************ 00:06:11.437 START TEST event_reactor 00:06:11.437 ************************************ 00:06:11.437 14:13:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.437 [2024-11-29 14:13:53.053263] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:11.437 [2024-11-29 14:13:53.053595] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70795 ] 00:06:11.437 [2024-11-29 14:13:53.205991] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.698 [2024-11-29 14:13:53.256523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.640 test_start 00:06:12.640 oneshot 00:06:12.640 tick 100 00:06:12.640 tick 100 00:06:12.640 tick 250 00:06:12.640 tick 100 00:06:12.640 tick 100 00:06:12.640 tick 100 00:06:12.640 tick 250 00:06:12.640 tick 500 00:06:12.640 tick 100 00:06:12.640 tick 100 00:06:12.640 tick 250 00:06:12.640 tick 100 00:06:12.640 tick 100 00:06:12.640 test_end 00:06:12.640 00:06:12.640 real 0m1.323s 00:06:12.640 user 0m1.110s 00:06:12.640 sys 0m0.100s 00:06:12.640 ************************************ 00:06:12.640 END TEST event_reactor 00:06:12.640 ************************************ 00:06:12.640 14:13:54 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.640 14:13:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 14:13:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.640 14:13:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:12.640 14:13:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.640 14:13:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 ************************************ 00:06:12.640 START TEST event_reactor_perf 00:06:12.640 ************************************ 00:06:12.640 14:13:54 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.899 [2024-11-29 14:13:54.443712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:12.899 [2024-11-29 14:13:54.443861] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70827 ] 00:06:12.899 [2024-11-29 14:13:54.594834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.899 [2024-11-29 14:13:54.644809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.286 test_start 00:06:14.286 test_end 00:06:14.286 Performance: 306363 events per second 00:06:14.286 00:06:14.286 real 0m1.318s 00:06:14.286 user 0m1.125s 00:06:14.286 sys 0m0.082s 00:06:14.286 14:13:55 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.286 ************************************ 00:06:14.286 END TEST event_reactor_perf 00:06:14.286 ************************************ 00:06:14.286 14:13:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.286 14:13:55 event -- event/event.sh@49 -- # uname -s 00:06:14.286 14:13:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.286 14:13:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.286 14:13:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.286 14:13:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.286 14:13:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.286 ************************************ 00:06:14.286 START TEST event_scheduler 00:06:14.286 ************************************ 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.286 * Looking for test storage... 
00:06:14.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.286 14:13:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.286 --rc genhtml_branch_coverage=1 00:06:14.286 --rc genhtml_function_coverage=1 00:06:14.286 --rc genhtml_legend=1 00:06:14.286 --rc geninfo_all_blocks=1 00:06:14.286 --rc geninfo_unexecuted_blocks=1 00:06:14.286 00:06:14.286 ' 00:06:14.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.286 --rc genhtml_branch_coverage=1 00:06:14.286 --rc genhtml_function_coverage=1 00:06:14.286 --rc genhtml_legend=1 00:06:14.286 --rc geninfo_all_blocks=1 00:06:14.286 --rc geninfo_unexecuted_blocks=1 00:06:14.286 00:06:14.286 ' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.286 --rc genhtml_branch_coverage=1 00:06:14.286 --rc genhtml_function_coverage=1 00:06:14.286 --rc genhtml_legend=1 00:06:14.286 --rc geninfo_all_blocks=1 00:06:14.286 --rc geninfo_unexecuted_blocks=1 00:06:14.286 00:06:14.286 ' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.286 --rc genhtml_branch_coverage=1 00:06:14.286 --rc genhtml_function_coverage=1 00:06:14.286 --rc genhtml_legend=1 00:06:14.286 --rc geninfo_all_blocks=1 00:06:14.286 --rc geninfo_unexecuted_blocks=1 00:06:14.286 00:06:14.286 ' 00:06:14.286 14:13:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.286 14:13:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70898 00:06:14.286 14:13:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.286 14:13:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70898 00:06:14.286 14:13:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70898 ']' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.286 14:13:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.286 [2024-11-29 14:13:56.040375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:14.286 [2024-11-29 14:13:56.040524] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70898 ] 00:06:14.552 [2024-11-29 14:13:56.187597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.552 [2024-11-29 14:13:56.227242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.552 [2024-11-29 14:13:56.227774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.552 [2024-11-29 14:13:56.227910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.552 [2024-11-29 14:13:56.228003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:15.133 14:13:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.133 POWER: Cannot set governor of lcore 0 to performance 00:06:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.133 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.133 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.133 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.133 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.133 [2024-11-29 14:13:56.857611] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.133 [2024-11-29 14:13:56.857638] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:15.133 [2024-11-29 14:13:56.857649] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.133 [2024-11-29 14:13:56.857666] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.133 [2024-11-29 14:13:56.857674] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.133 [2024-11-29 14:13:56.857713] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.133 14:13:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.133 14:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 [2024-11-29 14:13:56.939328] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:15.396 14:13:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.396 14:13:56 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.396 14:13:56 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 ************************************ 00:06:15.396 START TEST scheduler_create_thread 00:06:15.396 ************************************ 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 2 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 3 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 4 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 5 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 6 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 7 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 8 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 9 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 10 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.396 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.340 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.340 14:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:16.340 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.340 14:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.727 14:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.727 14:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:17.727 14:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:17.727 14:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.727 14:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.662 ************************************ 00:06:18.662 END TEST scheduler_create_thread 00:06:18.662 ************************************ 00:06:18.662 14:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.662 00:06:18.662 real 0m3.370s 00:06:18.662 user 0m0.018s 00:06:18.662 sys 0m0.005s 00:06:18.662 14:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.662 14:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.662 14:14:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:18.662 14:14:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70898 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70898 ']' 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70898 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70898 00:06:18.662 killing process with pid 70898 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70898' 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70898 00:06:18.662 14:14:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70898 00:06:18.920 [2024-11-29 14:14:00.699622] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:19.178 00:06:19.178 real 0m5.097s 00:06:19.178 user 0m9.993s 00:06:19.178 sys 0m0.368s 00:06:19.178 14:14:00 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.178 ************************************ 00:06:19.178 END TEST event_scheduler 00:06:19.178 ************************************ 00:06:19.178 14:14:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.178 14:14:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.178 14:14:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.178 14:14:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.178 14:14:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.178 14:14:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.178 ************************************ 00:06:19.178 START TEST app_repeat 00:06:19.178 ************************************ 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.178 Process app_repeat pid: 71004 00:06:19.178 spdk_app_start Round 0 00:06:19.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71004 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71004' 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.178 14:14:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71004 /var/tmp/spdk-nbd.sock 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.178 14:14:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.442 [2024-11-29 14:14:00.999604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:19.442 [2024-11-29 14:14:00.999715] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:06:19.442 [2024-11-29 14:14:01.146170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.442 [2024-11-29 14:14:01.181290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.442 [2024-11-29 14:14:01.181415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.372 14:14:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.372 14:14:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:20.372 14:14:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.372 Malloc0 00:06:20.372 14:14:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.630 Malloc1 00:06:20.630 14:14:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.630 14:14:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.889 /dev/nbd0 00:06:20.889 14:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.889 14:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:20.889 14:14:02 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.889 1+0 records in 00:06:20.889 1+0 records out 00:06:20.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038865 s, 10.5 MB/s 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:20.889 14:14:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:20.889 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.889 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.889 14:14:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.147 /dev/nbd1 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.147 1+0 records in 00:06:21.147 1+0 records out 00:06:21.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166965 s, 24.5 MB/s 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.147 14:14:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.147 
14:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.147 { 00:06:21.147 "nbd_device": "/dev/nbd0", 00:06:21.147 "bdev_name": "Malloc0" 00:06:21.147 }, 00:06:21.147 { 00:06:21.147 "nbd_device": "/dev/nbd1", 00:06:21.147 "bdev_name": "Malloc1" 00:06:21.147 } 00:06:21.147 ]' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.147 { 00:06:21.147 "nbd_device": "/dev/nbd0", 00:06:21.147 "bdev_name": "Malloc0" 00:06:21.147 }, 00:06:21.147 { 00:06:21.147 "nbd_device": "/dev/nbd1", 00:06:21.147 "bdev_name": "Malloc1" 00:06:21.147 } 00:06:21.147 ]' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.147 /dev/nbd1' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.147 /dev/nbd1' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.147 256+0 records in 00:06:21.147 256+0 records out 00:06:21.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135455 s, 77.4 MB/s 00:06:21.147 14:14:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.148 14:14:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.405 256+0 records in 00:06:21.405 256+0 records out 00:06:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191806 s, 54.7 MB/s 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.405 256+0 records in 00:06:21.405 256+0 records out 00:06:21.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170509 s, 61.5 MB/s 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.405 14:14:02 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.405 14:14:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.405 14:14:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.405 14:14:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.405 14:14:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.406 14:14:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.664 14:14:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.664 14:14:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.922 14:14:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.922 14:14:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.181 14:14:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.181 [2024-11-29 14:14:03.899503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.181 [2024-11-29 14:14:03.932131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.181 [2024-11-29 14:14:03.932249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.181 [2024-11-29 14:14:03.964342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.181 [2024-11-29 14:14:03.964397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.487 spdk_app_start Round 1 00:06:25.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.487 14:14:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.487 14:14:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.487 14:14:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71004 /var/tmp/spdk-nbd.sock 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.487 14:14:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.487 14:14:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.487 Malloc0 00:06:25.487 14:14:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.748 Malloc1 00:06:25.748 14:14:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.748 14:14:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.010 /dev/nbd0 00:06:26.010 14:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.010 14:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.010 1+0 records in 00:06:26.010 1+0 records out 
00:06:26.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016946 s, 24.2 MB/s 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.010 14:14:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.010 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.010 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.010 14:14:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.271 /dev/nbd1 00:06:26.271 14:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.271 14:14:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.271 1+0 records in 00:06:26.271 1+0 records out 00:06:26.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182359 s, 22.5 MB/s 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.271 14:14:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.271 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.272 14:14:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.272 14:14:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.272 14:14:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.272 14:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.533 { 00:06:26.533 "nbd_device": "/dev/nbd0", 00:06:26.533 "bdev_name": "Malloc0" 00:06:26.533 }, 00:06:26.533 { 00:06:26.533 "nbd_device": "/dev/nbd1", 00:06:26.533 "bdev_name": "Malloc1" 00:06:26.533 } 
00:06:26.533 ]' 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.533 { 00:06:26.533 "nbd_device": "/dev/nbd0", 00:06:26.533 "bdev_name": "Malloc0" 00:06:26.533 }, 00:06:26.533 { 00:06:26.533 "nbd_device": "/dev/nbd1", 00:06:26.533 "bdev_name": "Malloc1" 00:06:26.533 } 00:06:26.533 ]' 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.533 /dev/nbd1' 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.533 /dev/nbd1' 00:06:26.533 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.534 256+0 records in 00:06:26.534 256+0 records out 00:06:26.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731892 s, 143 MB/s 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.534 256+0 records in 00:06:26.534 256+0 records out 00:06:26.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164741 s, 63.7 MB/s 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.534 256+0 records in 00:06:26.534 256+0 records out 00:06:26.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186116 s, 56.3 MB/s 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.534 14:14:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.534 14:14:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.794 14:14:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.056 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.056 14:14:08 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.317 14:14:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.317 14:14:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.578 14:14:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.578 [2024-11-29 14:14:09.215595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.578 [2024-11-29 14:14:09.244267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.578 [2024-11-29 14:14:09.244271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.578 [2024-11-29 14:14:09.273379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.578 [2024-11-29 14:14:09.273423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.866 spdk_app_start Round 2 00:06:30.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.866 14:14:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.866 14:14:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:30.866 14:14:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71004 /var/tmp/spdk-nbd.sock 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.866 14:14:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:30.866 14:14:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.866 Malloc0 00:06:30.866 14:14:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.125 Malloc1 00:06:31.125 14:14:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.125 14:14:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.384 /dev/nbd0 00:06:31.384 14:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.384 14:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.384 1+0 records in 00:06:31.384 1+0 records out 
00:06:31.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188048 s, 21.8 MB/s 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.384 14:14:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.384 14:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.384 14:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.384 14:14:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.643 /dev/nbd1 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.643 1+0 records in 00:06:31.643 1+0 records out 00:06:31.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232235 s, 17.6 MB/s 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.643 14:14:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.643 { 00:06:31.643 "nbd_device": "/dev/nbd0", 00:06:31.643 "bdev_name": "Malloc0" 00:06:31.643 }, 00:06:31.643 { 00:06:31.643 "nbd_device": "/dev/nbd1", 00:06:31.643 "bdev_name": "Malloc1" 00:06:31.643 } 
00:06:31.643 ]' 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.643 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.643 { 00:06:31.643 "nbd_device": "/dev/nbd0", 00:06:31.643 "bdev_name": "Malloc0" 00:06:31.643 }, 00:06:31.643 { 00:06:31.643 "nbd_device": "/dev/nbd1", 00:06:31.643 "bdev_name": "Malloc1" 00:06:31.643 } 00:06:31.643 ]' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.902 /dev/nbd1' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.902 /dev/nbd1' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.902 256+0 records in 00:06:31.902 256+0 records out 00:06:31.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599102 s, 175 MB/s 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.902 256+0 records in 00:06:31.902 256+0 records out 00:06:31.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154446 s, 67.9 MB/s 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.902 256+0 records in 00:06:31.902 256+0 records out 00:06:31.902 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159933 s, 65.6 MB/s 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.902 14:14:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.902 14:14:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.161 14:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.420 14:14:14 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.420 14:14:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.420 14:14:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.679 14:14:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.679 [2024-11-29 14:14:14.470759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.937 [2024-11-29 14:14:14.498160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.937 [2024-11-29 14:14:14.498172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.937 [2024-11-29 14:14:14.527429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.937 [2024-11-29 14:14:14.527477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.219 14:14:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71004 /var/tmp/spdk-nbd.sock 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71004 ']' 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
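Condensed from the trace above, the app_repeat nbd verification is a short sequence of RPC calls plus dd/cmp data checks. The sketch below is a minimal reconstruction, assuming a target is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded; it is not the actual bdev/nbd_common.sh implementation.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Create two 64 MiB malloc bdevs with 4 KiB blocks and export them over NBD.
    $rpc -s "$sock" bdev_malloc_create 64 4096            # -> Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096            # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # Write the same random 1 MiB pattern to both devices, then read it back and compare.
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"        # exits non-zero if the data read back differs
    done
    rm -f "$tmp"

    # Tear the devices down and confirm nothing is still exported.
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1
    count=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "${count:-0}" -eq 0 ]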
00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.219 14:14:17 event.app_repeat -- event/event.sh@39 -- # killprocess 71004 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71004 ']' 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71004 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71004 00:06:36.219 killing process with pid 71004 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71004' 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71004 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71004 00:06:36.219 spdk_app_start is called in Round 0. 00:06:36.219 Shutdown signal received, stop current app iteration 00:06:36.219 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:36.219 spdk_app_start is called in Round 1. 00:06:36.219 Shutdown signal received, stop current app iteration 00:06:36.219 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:36.219 spdk_app_start is called in Round 2. 00:06:36.219 Shutdown signal received, stop current app iteration 00:06:36.219 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:36.219 spdk_app_start is called in Round 3. 00:06:36.219 Shutdown signal received, stop current app iteration 00:06:36.219 ************************************ 00:06:36.219 END TEST app_repeat 00:06:36.219 ************************************ 00:06:36.219 14:14:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.219 14:14:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.219 00:06:36.219 real 0m16.783s 00:06:36.219 user 0m37.429s 00:06:36.219 sys 0m2.020s 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.219 14:14:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.219 14:14:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.219 14:14:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.219 14:14:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.219 14:14:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.219 14:14:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.219 ************************************ 00:06:36.219 START TEST cpu_locks 00:06:36.219 ************************************ 00:06:36.219 14:14:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.219 * Looking for test storage... 
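The waitforlisten/killprocess helpers traced throughout this run follow a simple poll-then-signal pattern. The sketch below is reconstructed from the xtrace output and is not the real autotest_common.sh code; the retry budget and the reactor_0/sudo special-casing visible in the trace are simplified away.

    # Poll until the target process answers RPCs on its UNIX domain socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1               # process died before it started listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    # Terminate a target and wait for it to exit.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true
    }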
00:06:36.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.219 14:14:17 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.219 14:14:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.219 14:14:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.219 14:14:17 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.219 14:14:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.220 14:14:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.220 --rc genhtml_branch_coverage=1 00:06:36.220 --rc genhtml_function_coverage=1 00:06:36.220 --rc genhtml_legend=1 00:06:36.220 --rc geninfo_all_blocks=1 00:06:36.220 --rc geninfo_unexecuted_blocks=1 00:06:36.220 00:06:36.220 ' 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.220 --rc genhtml_branch_coverage=1 00:06:36.220 --rc genhtml_function_coverage=1 
00:06:36.220 --rc genhtml_legend=1 00:06:36.220 --rc geninfo_all_blocks=1 00:06:36.220 --rc geninfo_unexecuted_blocks=1 00:06:36.220 00:06:36.220 ' 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.220 --rc genhtml_branch_coverage=1 00:06:36.220 --rc genhtml_function_coverage=1 00:06:36.220 --rc genhtml_legend=1 00:06:36.220 --rc geninfo_all_blocks=1 00:06:36.220 --rc geninfo_unexecuted_blocks=1 00:06:36.220 00:06:36.220 ' 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.220 --rc genhtml_branch_coverage=1 00:06:36.220 --rc genhtml_function_coverage=1 00:06:36.220 --rc genhtml_legend=1 00:06:36.220 --rc geninfo_all_blocks=1 00:06:36.220 --rc geninfo_unexecuted_blocks=1 00:06:36.220 00:06:36.220 ' 00:06:36.220 14:14:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.220 14:14:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.220 14:14:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.220 14:14:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.220 14:14:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.220 ************************************ 00:06:36.220 START TEST default_locks 00:06:36.220 ************************************ 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71423 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71423 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71423 ']' 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.220 14:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.220 [2024-11-29 14:14:17.999920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:36.220 [2024-11-29 14:14:18.000044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71423 ] 00:06:36.478 [2024-11-29 14:14:18.136880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.479 [2024-11-29 14:14:18.169813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.045 14:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.045 14:14:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:37.045 14:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71423 00:06:37.045 14:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71423 00:06:37.045 14:14:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71423 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71423 ']' 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71423 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71423 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.303 killing process with pid 71423 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71423' 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71423 00:06:37.303 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71423 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71423 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71423 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71423 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71423 ']' 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
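The core-lock check used by default_locks is just lslocks filtered for the spdk_cpu_lock entries the target holds for each claimed core. A rough equivalent, assuming util-linux lslocks is available (a sketch, not the cpu_locks.sh helper itself):

    # Return success if the given spdk_tgt pid holds at least one CPU core lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Example: start a single-core target, verify it claimed core 0, then stop it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 1                                   # crude stand-in for waitforlisten
    locks_exist "$tgt_pid" && echo "core lock held by pid $tgt_pid"
    kill "$tgt_pid"; wait "$tgt_pid" 2> /dev/null || true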
00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.561 ERROR: process (pid: 71423) is no longer running 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71423) - No such process 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.561 00:06:37.561 real 0m1.380s 00:06:37.561 user 0m1.393s 00:06:37.561 sys 0m0.417s 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.561 ************************************ 00:06:37.561 END TEST default_locks 00:06:37.561 ************************************ 00:06:37.561 14:14:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.561 14:14:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.561 14:14:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.561 14:14:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.561 14:14:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.561 ************************************ 00:06:37.561 START TEST default_locks_via_rpc 00:06:37.561 ************************************ 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71471 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71471 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71471 ']' 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
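After the target is killed, the test deliberately re-runs waitforlisten against the dead pid and expects it to fail; the NOT wrapper seen in the trace inverts the exit status for exactly this kind of negative check. A hedged approximation is below; the real helper also validates the wrapped command and distinguishes signal exits via es > 128.

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1      # the wrapped command unexpectedly succeeded
        fi
        return 0          # failure was the expected outcome
    }

    # As in the trace: the target was already killed, so this must fail.
    # NOT waitforlisten 71423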
00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.561 14:14:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.819 [2024-11-29 14:14:19.420810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:37.819 [2024-11-29 14:14:19.420930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71471 ] 00:06:37.819 [2024-11-29 14:14:19.568383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.819 [2024-11-29 14:14:19.600439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71471 ']' 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.751 14:14:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.751 killing process with pid 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71471' 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71471 00:06:38.751 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71471 00:06:39.009 00:06:39.009 real 0m1.374s 00:06:39.009 user 0m1.417s 00:06:39.009 sys 0m0.415s 00:06:39.009 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.009 ************************************ 00:06:39.009 END TEST default_locks_via_rpc 00:06:39.009 ************************************ 00:06:39.009 14:14:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.009 14:14:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:39.009 14:14:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.009 14:14:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.009 14:14:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.009 ************************************ 00:06:39.009 START TEST non_locking_app_on_locked_coremask 00:06:39.009 ************************************ 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71512 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71512 /var/tmp/spdk.sock 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71512 ']' 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.009 14:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.268 [2024-11-29 14:14:20.851660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
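The default_locks_via_rpc variant toggles the same locking behaviour at runtime rather than at startup, using the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. Roughly (a sketch; rpc_cmd in the harness wraps rpc.py, here it is called directly, and tgt_pid is assumed to hold the pid of a running target on the default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc framework_disable_cpumask_locks      # drop the per-core lock files while running
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "locks released"

    $rpc framework_enable_cpumask_locks       # take the per-core locks again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locks re-acquired"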
00:06:39.268 [2024-11-29 14:14:20.851783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71512 ] 00:06:39.268 [2024-11-29 14:14:20.998169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.268 [2024-11-29 14:14:21.032979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71528 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71528 /var/tmp/spdk2.sock 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71528 ']' 00:06:40.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.200 14:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.200 [2024-11-29 14:14:21.750266] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:40.200 [2024-11-29 14:14:21.750377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71528 ] 00:06:40.200 [2024-11-29 14:14:21.904430] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
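The non_locking_app_on_locked_coremask test pairs a lock-holding target with a second instance that opts out of locking: because the second one passes --disable-cpumask-locks it can share core 0, and it is given its own RPC socket so the two can be driven independently. In outline (a sketch assuming both binaries start cleanly; the pids are illustrative):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                                                   # claims the core 0 lock
    pid1=$!
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0, no lock
    pid2=$!

Keeping the second instance on /var/tmp/spdk2.sock is what lets the test verify and kill each target separately while both stay pinned to the same core.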
00:06:40.200 [2024-11-29 14:14:21.904485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.200 [2024-11-29 14:14:21.970545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71512 ']' 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.131 killing process with pid 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71512' 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71512 00:06:41.131 14:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71512 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71528 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71528 ']' 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71528 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71528 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71528' 00:06:41.696 killing process with pid 71528 00:06:41.696 14:14:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71528 00:06:41.696 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71528 00:06:41.953 00:06:41.953 real 0m2.951s 00:06:41.953 user 0m3.211s 00:06:41.953 sys 0m0.808s 00:06:41.953 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.953 ************************************ 00:06:41.953 END TEST non_locking_app_on_locked_coremask 00:06:41.953 ************************************ 00:06:41.953 14:14:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.211 14:14:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.211 14:14:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.211 14:14:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.211 14:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.211 ************************************ 00:06:42.211 START TEST locking_app_on_unlocked_coremask 00:06:42.211 ************************************ 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71586 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71586 /var/tmp/spdk.sock 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71586 ']' 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.211 14:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.211 [2024-11-29 14:14:23.860869] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.211 [2024-11-29 14:14:23.860995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71586 ] 00:06:42.468 [2024-11-29 14:14:24.010844] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.468 [2024-11-29 14:14:24.010889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.468 [2024-11-29 14:14:24.044711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.033 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.033 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.033 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.033 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71602 00:06:43.033 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71602 /var/tmp/spdk2.sock 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71602 ']' 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.034 14:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.034 [2024-11-29 14:14:24.757370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
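With two targets up at once, every RPC has to name the socket it is aimed at: -r on spdk_tgt sets where that instance listens, and -s on rpc.py selects which instance to talk to. For example (sketch; rpc_get_methods is used only as an arbitrary read-only call):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc -s /var/tmp/spdk.sock  rpc_get_methods > /dev/null && echo "first instance up"
    $rpc -s /var/tmp/spdk2.sock rpc_get_methods > /dev/null && echo "second instance up"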
00:06:43.034 [2024-11-29 14:14:24.757486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:06:43.291 [2024-11-29 14:14:24.912508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.291 [2024-11-29 14:14:24.977801] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.856 14:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.856 14:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.856 14:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71602 00:06:43.856 14:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71602 00:06:43.856 14:14:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71586 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71586 ']' 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71586 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71586 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.420 killing process with pid 71586 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71586' 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71586 00:06:44.420 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71586 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71602 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71602 ']' 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71602 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71602 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.986 killing process with pid 71602 00:06:44.986 14:14:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71602' 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71602 00:06:44.986 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71602 00:06:45.244 00:06:45.244 real 0m3.120s 00:06:45.244 user 0m3.385s 00:06:45.244 sys 0m0.875s 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.244 ************************************ 00:06:45.244 END TEST locking_app_on_unlocked_coremask 00:06:45.244 ************************************ 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.244 14:14:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.244 14:14:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.244 14:14:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.244 14:14:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.244 ************************************ 00:06:45.244 START TEST locking_app_on_locked_coremask 00:06:45.244 ************************************ 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71660 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71660 /var/tmp/spdk.sock 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71660 ']' 00:06:45.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.244 14:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.502 [2024-11-29 14:14:27.042665] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:45.502 [2024-11-29 14:14:27.042779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71660 ] 00:06:45.502 [2024-11-29 14:14:27.188925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.502 [2024-11-29 14:14:27.221456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71676 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71676 /var/tmp/spdk2.sock 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71676 /var/tmp/spdk2.sock 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71676 /var/tmp/spdk2.sock 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71676 ']' 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.491 14:14:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.491 [2024-11-29 14:14:27.955155] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
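In locking_app_on_locked_coremask the second target is started without --disable-cpumask-locks on the same core mask, so it should fail to claim core 0 ("Cannot create lock on core 0, probably process ... has claimed it") and exit; the test wraps it in the same expect-failure pattern. A condensed sketch, with paths as in the trace and sleep standing in for the real readiness check:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                        # first instance claims the core 0 lock
    pid1=$!
    sleep 1                                   # crude stand-in for waitforlisten

    if $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started on a locked core"
    else
        echo "second instance refused core 0, as expected"
    fi
    kill "$pid1"; wait "$pid1" 2> /dev/null || true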
00:06:46.491 [2024-11-29 14:14:27.955291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71676 ] 00:06:46.491 [2024-11-29 14:14:28.112227] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71660 has claimed it. 00:06:46.491 [2024-11-29 14:14:28.112302] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.058 ERROR: process (pid: 71676) is no longer running 00:06:47.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71676) - No such process 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71660 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71660 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71660 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71660 ']' 00:06:47.058 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71660 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71660 00:06:47.316 killing process with pid 71660 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71660' 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71660 00:06:47.316 14:14:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71660 00:06:47.574 00:06:47.574 real 0m2.178s 00:06:47.574 user 0m2.423s 00:06:47.574 sys 0m0.559s 00:06:47.574 14:14:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.574 14:14:29 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:47.574 ************************************ 00:06:47.574 END TEST locking_app_on_locked_coremask 00:06:47.574 ************************************ 00:06:47.574 14:14:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.574 14:14:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.574 14:14:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.574 14:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.574 ************************************ 00:06:47.574 START TEST locking_overlapped_coremask 00:06:47.574 ************************************ 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71728 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71728 /var/tmp/spdk.sock 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71728 ']' 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.574 14:14:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.574 [2024-11-29 14:14:29.275484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
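The locks_exist check earlier in this test relies on lslocks: the surviving target (pid 71660 in this run) must still hold a file lock on an spdk_cpu_lock file even after the competing process has exited. A minimal stand-alone version of that check, assuming the pid is substituted for whatever target is actually running:

pid=71660   # pid taken from the run above; replace with the live spdk_tgt pid
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid still holds its CPU core lock"
else
    echo "pid $pid holds no spdk_cpu_lock"
fi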
00:06:47.574 [2024-11-29 14:14:29.275613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71728 ] 00:06:47.833 [2024-11-29 14:14:29.424057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.833 [2024-11-29 14:14:29.458567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.833 [2024-11-29 14:14:29.458762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.833 [2024-11-29 14:14:29.458799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71736 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71736 /var/tmp/spdk2.sock 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71736 /var/tmp/spdk2.sock 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71736 /var/tmp/spdk2.sock 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71736 ']' 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.399 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.399 [2024-11-29 14:14:30.172004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
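The primary target above was started with -m 0x7 and the second one is being started with -m 0x1c, so the two core masks intersect on core 2; that overlap is what this test exercises and why the second claim fails just below. A small hypothetical helper (not part of the test suite) to decode such masks:

cores() {
    local m=$((16#${1#0x})) i
    # print the index of every bit set in the mask
    for ((i = 0; i < 64; i++)); do (( (m >> i) & 1 )) && printf '%d ' "$i"; done
    echo
}
cores 0x7    # -> 0 1 2   (primary spdk_tgt)
cores 0x1c   # -> 2 3 4   (second spdk_tgt; core 2 is already claimed, so its lock attempt is rejected)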
00:06:48.400 [2024-11-29 14:14:30.172129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71736 ] 00:06:48.658 [2024-11-29 14:14:30.325512] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71728 has claimed it. 00:06:48.658 [2024-11-29 14:14:30.325573] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.225 ERROR: process (pid: 71736) is no longer running 00:06:49.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71736) - No such process 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71728 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71728 ']' 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71728 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71728 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71728' 00:06:49.225 killing process with pid 71728 00:06:49.225 14:14:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71728 00:06:49.225 14:14:30 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71728 00:06:49.486 00:06:49.486 real 0m1.907s 00:06:49.486 user 0m5.272s 00:06:49.486 sys 0m0.379s 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 ************************************ 00:06:49.486 END TEST locking_overlapped_coremask 00:06:49.486 ************************************ 00:06:49.486 14:14:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.486 14:14:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.486 14:14:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.486 14:14:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 ************************************ 00:06:49.486 START TEST locking_overlapped_coremask_via_rpc 00:06:49.486 ************************************ 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:49.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71778 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71778 /var/tmp/spdk.sock 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71778 ']' 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 14:14:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.486 [2024-11-29 14:14:31.256585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:49.486 [2024-11-29 14:14:31.256697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71778 ] 00:06:49.746 [2024-11-29 14:14:31.404266] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
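check_remaining_locks above verifies the lock files themselves: a target running with -m 0x7 is expected to leave exactly /var/tmp/spdk_cpu_lock_000 through _002 behind, whereas the target just launched with --disable-cpumask-locks prints "CPU core locks deactivated." and is expected to create none until the locks are enabled over RPC. A minimal sketch of the same file-level check, mirroring the comparison the test performs:

locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${expected[*]}" ]]; then
    echo "lock files for cores 0-2 present"
else
    echo "unexpected lock set: ${locks[*]}"
fi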
00:06:49.746 [2024-11-29 14:14:31.404315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.746 [2024-11-29 14:14:31.434992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.746 [2024-11-29 14:14:31.435224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.746 [2024-11-29 14:14:31.435302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71796 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71796 /var/tmp/spdk2.sock 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71796 ']' 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.319 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.581 [2024-11-29 14:14:32.160398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:50.581 [2024-11-29 14:14:32.160718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71796 ] 00:06:50.581 [2024-11-29 14:14:32.314397] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.581 [2024-11-29 14:14:32.314449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.843 [2024-11-29 14:14:32.380199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.843 [2024-11-29 14:14:32.383585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.843 [2024-11-29 14:14:32.383654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.415 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.415 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.415 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.415 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.415 14:14:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.415 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.415 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.415 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:51.415 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.416 [2024-11-29 14:14:33.013635] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71778 has claimed it. 00:06:51.416 request: 00:06:51.416 { 00:06:51.416 "method": "framework_enable_cpumask_locks", 00:06:51.416 "req_id": 1 00:06:51.416 } 00:06:51.416 Got JSON-RPC error response 00:06:51.416 response: 00:06:51.416 { 00:06:51.416 "code": -32603, 00:06:51.416 "message": "Failed to claim CPU core: 2" 00:06:51.416 } 00:06:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
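The request/response pair above is the JSON-RPC exchange behind rpc_cmd framework_enable_cpumask_locks: the second target, started with --disable-cpumask-locks, asks to claim its cores and gets error -32603 because pid 71778 still holds the lock for core 2. A hedged sketch of issuing the same call by hand against the secondary socket:

# ask the second target on /var/tmp/spdk2.sock to claim its core locks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# while the primary target holds core 2 this fails with "Failed to claim CPU core: 2" (-32603);
# if no other process has claimed the overlapping core, the same call succeeds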
00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71778 /var/tmp/spdk.sock 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71778 ']' 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.416 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71796 /var/tmp/spdk2.sock 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71796 ']' 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.675 ************************************ 00:06:51.675 END TEST locking_overlapped_coremask_via_rpc 00:06:51.675 ************************************ 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.675 00:06:51.675 real 0m2.265s 00:06:51.675 user 0m1.056s 00:06:51.675 sys 0m0.141s 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.675 14:14:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.932 14:14:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.933 14:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71778 ]] 00:06:51.933 14:14:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71778 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71778 ']' 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71778 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71778 00:06:51.933 killing process with pid 71778 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71778' 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71778 00:06:51.933 14:14:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71778 00:06:52.192 14:14:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71796 ]] 00:06:52.192 14:14:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71796 00:06:52.192 14:14:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71796 ']' 00:06:52.192 14:14:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71796 00:06:52.192 14:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:52.192 14:14:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.192 
14:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71796 00:06:52.192 killing process with pid 71796 00:06:52.192 14:14:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:52.193 14:14:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:52.193 14:14:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71796' 00:06:52.193 14:14:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71796 00:06:52.193 14:14:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71796 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71778 ]] 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71778 00:06:52.453 Process with pid 71778 is not found 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71778 ']' 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71778 00:06:52.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71778) - No such process 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71778 is not found' 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71796 ]] 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71796 00:06:52.453 Process with pid 71796 is not found 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71796 ']' 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71796 00:06:52.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71796) - No such process 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71796 is not found' 00:06:52.453 14:14:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.453 00:06:52.453 real 0m16.267s 00:06:52.453 user 0m28.400s 00:06:52.453 sys 0m4.325s 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.453 14:14:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.453 ************************************ 00:06:52.453 END TEST cpu_locks 00:06:52.453 ************************************ 00:06:52.453 ************************************ 00:06:52.453 END TEST event 00:06:52.453 ************************************ 00:06:52.453 00:06:52.453 real 0m42.566s 00:06:52.453 user 1m22.320s 00:06:52.453 sys 0m7.211s 00:06:52.453 14:14:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.453 14:14:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.453 14:14:34 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.453 14:14:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.453 14:14:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.453 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:52.453 ************************************ 00:06:52.453 START TEST thread 00:06:52.453 ************************************ 00:06:52.453 14:14:34 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.453 * Looking for test storage... 
00:06:52.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.453 14:14:34 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.453 14:14:34 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.453 14:14:34 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.715 14:14:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.715 14:14:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.715 14:14:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.715 14:14:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.715 14:14:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.715 14:14:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.715 14:14:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.715 14:14:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.715 14:14:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.715 14:14:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.715 14:14:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.715 14:14:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:52.715 14:14:34 thread -- scripts/common.sh@345 -- # : 1 00:06:52.715 14:14:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.715 14:14:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.715 14:14:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:52.715 14:14:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:52.715 14:14:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.715 14:14:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:52.715 14:14:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.715 14:14:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:52.715 14:14:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:52.715 14:14:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.715 14:14:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:52.715 14:14:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.715 14:14:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.715 14:14:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.715 14:14:34 thread -- scripts/common.sh@368 -- # return 0 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.715 --rc genhtml_branch_coverage=1 00:06:52.715 --rc genhtml_function_coverage=1 00:06:52.715 --rc genhtml_legend=1 00:06:52.715 --rc geninfo_all_blocks=1 00:06:52.715 --rc geninfo_unexecuted_blocks=1 00:06:52.715 00:06:52.715 ' 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.715 --rc genhtml_branch_coverage=1 00:06:52.715 --rc genhtml_function_coverage=1 00:06:52.715 --rc genhtml_legend=1 00:06:52.715 --rc geninfo_all_blocks=1 00:06:52.715 --rc geninfo_unexecuted_blocks=1 00:06:52.715 00:06:52.715 ' 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:52.715 --rc genhtml_branch_coverage=1 00:06:52.715 --rc genhtml_function_coverage=1 00:06:52.715 --rc genhtml_legend=1 00:06:52.715 --rc geninfo_all_blocks=1 00:06:52.715 --rc geninfo_unexecuted_blocks=1 00:06:52.715 00:06:52.715 ' 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.715 --rc genhtml_branch_coverage=1 00:06:52.715 --rc genhtml_function_coverage=1 00:06:52.715 --rc genhtml_legend=1 00:06:52.715 --rc geninfo_all_blocks=1 00:06:52.715 --rc geninfo_unexecuted_blocks=1 00:06:52.715 00:06:52.715 ' 00:06:52.715 14:14:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.715 14:14:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.715 ************************************ 00:06:52.715 START TEST thread_poller_perf 00:06:52.715 ************************************ 00:06:52.715 14:14:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.715 [2024-11-29 14:14:34.304878] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:52.715 [2024-11-29 14:14:34.305000] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71923 ] 00:06:52.715 [2024-11-29 14:14:34.452897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.715 [2024-11-29 14:14:34.483589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.715 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:54.097 [2024-11-29T14:14:35.891Z] ====================================== 00:06:54.097 [2024-11-29T14:14:35.891Z] busy:2608071532 (cyc) 00:06:54.097 [2024-11-29T14:14:35.891Z] total_run_count: 306000 00:06:54.097 [2024-11-29T14:14:35.891Z] tsc_hz: 2600000000 (cyc) 00:06:54.097 [2024-11-29T14:14:35.891Z] ====================================== 00:06:54.097 [2024-11-29T14:14:35.891Z] poller_cost: 8523 (cyc), 3278 (nsec) 00:06:54.097 00:06:54.097 real 0m1.275s 00:06:54.097 user 0m1.101s 00:06:54.097 sys 0m0.066s 00:06:54.097 14:14:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.097 14:14:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.097 ************************************ 00:06:54.097 END TEST thread_poller_perf 00:06:54.097 ************************************ 00:06:54.097 14:14:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.097 14:14:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:54.097 14:14:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.097 14:14:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.097 ************************************ 00:06:54.097 START TEST thread_poller_perf 00:06:54.097 ************************************ 00:06:54.097 14:14:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.097 [2024-11-29 14:14:35.622036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.097 [2024-11-29 14:14:35.622286] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71965 ] 00:06:54.097 [2024-11-29 14:14:35.770060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.097 [2024-11-29 14:14:35.800813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.097 Running 1000 pollers for 1 seconds with 0 microseconds period. 
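The first result block above gives everything needed to recompute poller_cost: it is simply busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A quick check using the figures printed above:

busy=2608071532; runs=306000; tsc_hz=2600000000
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
# prints roughly: poller_cost: 8523 (cyc), 3278 (nsec), matching the table above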
00:06:55.482 [2024-11-29T14:14:37.276Z] ====================================== 00:06:55.482 [2024-11-29T14:14:37.276Z] busy:2603743324 (cyc) 00:06:55.482 [2024-11-29T14:14:37.276Z] total_run_count: 3967000 00:06:55.482 [2024-11-29T14:14:37.276Z] tsc_hz: 2600000000 (cyc) 00:06:55.482 [2024-11-29T14:14:37.276Z] ====================================== 00:06:55.482 [2024-11-29T14:14:37.276Z] poller_cost: 656 (cyc), 252 (nsec) 00:06:55.482 00:06:55.482 real 0m1.266s 00:06:55.482 user 0m1.085s 00:06:55.482 sys 0m0.074s 00:06:55.482 14:14:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.482 ************************************ 00:06:55.482 14:14:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.482 END TEST thread_poller_perf 00:06:55.482 ************************************ 00:06:55.482 14:14:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.482 ************************************ 00:06:55.482 END TEST thread 00:06:55.482 ************************************ 00:06:55.482 00:06:55.482 real 0m2.772s 00:06:55.482 user 0m2.312s 00:06:55.482 sys 0m0.241s 00:06:55.482 14:14:36 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.482 14:14:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.482 14:14:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.482 14:14:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.482 14:14:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.482 14:14:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.482 14:14:36 -- common/autotest_common.sh@10 -- # set +x 00:06:55.482 ************************************ 00:06:55.482 START TEST app_cmdline 00:06:55.482 ************************************ 00:06:55.482 14:14:36 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.482 * Looking for test storage... 
00:06:55.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.482 14:14:36 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.482 14:14:36 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.482 14:14:36 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.482 14:14:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.482 --rc genhtml_branch_coverage=1 00:06:55.482 --rc genhtml_function_coverage=1 00:06:55.482 --rc genhtml_legend=1 00:06:55.482 --rc geninfo_all_blocks=1 00:06:55.482 --rc geninfo_unexecuted_blocks=1 00:06:55.482 00:06:55.482 ' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.482 --rc genhtml_branch_coverage=1 00:06:55.482 --rc genhtml_function_coverage=1 00:06:55.482 --rc genhtml_legend=1 00:06:55.482 --rc geninfo_all_blocks=1 00:06:55.482 --rc geninfo_unexecuted_blocks=1 00:06:55.482 00:06:55.482 ' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.482 --rc genhtml_branch_coverage=1 00:06:55.482 --rc genhtml_function_coverage=1 00:06:55.482 --rc genhtml_legend=1 00:06:55.482 --rc geninfo_all_blocks=1 00:06:55.482 --rc geninfo_unexecuted_blocks=1 00:06:55.482 00:06:55.482 ' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.482 --rc genhtml_branch_coverage=1 00:06:55.482 --rc genhtml_function_coverage=1 00:06:55.482 --rc genhtml_legend=1 00:06:55.482 --rc geninfo_all_blocks=1 00:06:55.482 --rc geninfo_unexecuted_blocks=1 00:06:55.482 00:06:55.482 ' 00:06:55.482 14:14:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.482 14:14:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72043 00:06:55.482 14:14:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72043 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72043 ']' 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.482 14:14:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.482 14:14:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.482 [2024-11-29 14:14:37.131744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
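The target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should answer on /var/tmp/spdk.sock; anything else is rejected, which is what the env_dpdk_get_mem_stats call later in this test demonstrates. A minimal sketch of exercising the allow-list by hand:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC spdk_get_version          # allowed: returns the version JSON shown below
$RPC rpc_get_methods           # allowed: lists the permitted methods
$RPC env_dpdk_get_mem_stats    # rejected: JSON-RPC error -32601 "Method not found"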
00:06:55.482 [2024-11-29 14:14:37.132032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:06:55.743 [2024-11-29 14:14:37.281847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.743 [2024-11-29 14:14:37.312678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.347 14:14:37 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.347 14:14:37 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:56.347 14:14:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.631 { 00:06:56.631 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:56.631 "fields": { 00:06:56.631 "major": 24, 00:06:56.631 "minor": 9, 00:06:56.631 "patch": 1, 00:06:56.631 "suffix": "-pre", 00:06:56.631 "commit": "b18e1bd62" 00:06:56.631 } 00:06:56.631 } 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.631 request: 00:06:56.631 { 00:06:56.631 "method": "env_dpdk_get_mem_stats", 00:06:56.631 "req_id": 1 00:06:56.631 } 00:06:56.631 Got JSON-RPC error response 00:06:56.631 response: 00:06:56.631 { 00:06:56.631 "code": -32601, 00:06:56.631 "message": "Method not found" 00:06:56.631 } 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.631 14:14:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72043 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72043 ']' 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72043 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72043 00:06:56.631 killing process with pid 72043 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72043' 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@969 -- # kill 72043 00:06:56.631 14:14:38 app_cmdline -- common/autotest_common.sh@974 -- # wait 72043 00:06:57.205 00:06:57.205 real 0m1.878s 00:06:57.205 user 0m2.172s 00:06:57.205 sys 0m0.428s 00:06:57.205 14:14:38 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.205 14:14:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.205 ************************************ 00:06:57.205 END TEST app_cmdline 00:06:57.205 ************************************ 00:06:57.205 14:14:38 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.205 14:14:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.205 14:14:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.205 14:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:57.205 ************************************ 00:06:57.205 START TEST version 00:06:57.205 ************************************ 00:06:57.205 14:14:38 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.205 * Looking for test storage... 
00:06:57.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.205 14:14:38 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.205 14:14:38 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.205 14:14:38 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.466 14:14:38 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.466 14:14:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.466 14:14:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.466 14:14:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.466 14:14:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.466 14:14:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.466 14:14:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.466 14:14:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.466 14:14:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.466 14:14:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.466 14:14:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.466 14:14:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.466 14:14:38 version -- scripts/common.sh@344 -- # case "$op" in 00:06:57.466 14:14:38 version -- scripts/common.sh@345 -- # : 1 00:06:57.466 14:14:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.466 14:14:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.466 14:14:38 version -- scripts/common.sh@365 -- # decimal 1 00:06:57.466 14:14:38 version -- scripts/common.sh@353 -- # local d=1 00:06:57.466 14:14:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.466 14:14:39 version -- scripts/common.sh@355 -- # echo 1 00:06:57.466 14:14:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.466 14:14:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:57.466 14:14:39 version -- scripts/common.sh@353 -- # local d=2 00:06:57.466 14:14:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.466 14:14:39 version -- scripts/common.sh@355 -- # echo 2 00:06:57.466 14:14:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.466 14:14:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.466 14:14:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.466 14:14:39 version -- scripts/common.sh@368 -- # return 0 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.466 --rc genhtml_branch_coverage=1 00:06:57.466 --rc genhtml_function_coverage=1 00:06:57.466 --rc genhtml_legend=1 00:06:57.466 --rc geninfo_all_blocks=1 00:06:57.466 --rc geninfo_unexecuted_blocks=1 00:06:57.466 00:06:57.466 ' 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.466 --rc genhtml_branch_coverage=1 00:06:57.466 --rc genhtml_function_coverage=1 00:06:57.466 --rc genhtml_legend=1 00:06:57.466 --rc geninfo_all_blocks=1 00:06:57.466 --rc geninfo_unexecuted_blocks=1 00:06:57.466 00:06:57.466 ' 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.466 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:57.466 --rc genhtml_branch_coverage=1 00:06:57.466 --rc genhtml_function_coverage=1 00:06:57.466 --rc genhtml_legend=1 00:06:57.466 --rc geninfo_all_blocks=1 00:06:57.466 --rc geninfo_unexecuted_blocks=1 00:06:57.466 00:06:57.466 ' 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.466 --rc genhtml_branch_coverage=1 00:06:57.466 --rc genhtml_function_coverage=1 00:06:57.466 --rc genhtml_legend=1 00:06:57.466 --rc geninfo_all_blocks=1 00:06:57.466 --rc geninfo_unexecuted_blocks=1 00:06:57.466 00:06:57.466 ' 00:06:57.466 14:14:39 version -- app/version.sh@17 -- # get_header_version major 00:06:57.466 14:14:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # cut -f2 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.466 14:14:39 version -- app/version.sh@17 -- # major=24 00:06:57.466 14:14:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # cut -f2 00:06:57.466 14:14:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.466 14:14:39 version -- app/version.sh@18 -- # minor=9 00:06:57.466 14:14:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.466 14:14:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # cut -f2 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.466 14:14:39 version -- app/version.sh@19 -- # patch=1 00:06:57.466 14:14:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.466 14:14:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # cut -f2 00:06:57.466 14:14:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.466 14:14:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.466 14:14:39 version -- app/version.sh@22 -- # version=24.9 00:06:57.466 14:14:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.466 14:14:39 version -- app/version.sh@25 -- # version=24.9.1 00:06:57.466 14:14:39 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:57.466 14:14:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:57.466 14:14:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.466 14:14:39 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:57.466 14:14:39 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:57.466 ************************************ 00:06:57.466 END TEST version 00:06:57.466 ************************************ 00:06:57.466 00:06:57.466 real 0m0.215s 00:06:57.466 user 0m0.137s 00:06:57.466 sys 0m0.107s 00:06:57.466 14:14:39 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.466 14:14:39 
version -- common/autotest_common.sh@10 -- # set +x 00:06:57.466 14:14:39 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:57.466 14:14:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:57.466 14:14:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:57.466 14:14:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:57.466 14:14:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.466 14:14:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.466 14:14:39 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:57.466 14:14:39 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.466 14:14:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.466 14:14:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.466 14:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:57.466 ************************************ 00:06:57.466 START TEST blockdev_nvme 00:06:57.466 ************************************ 00:06:57.466 14:14:39 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.466 * Looking for test storage... 00:06:57.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:57.466 14:14:39 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.466 14:14:39 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.466 14:14:39 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.728 14:14:39 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.728 --rc genhtml_branch_coverage=1 00:06:57.728 --rc genhtml_function_coverage=1 00:06:57.728 --rc genhtml_legend=1 00:06:57.728 --rc geninfo_all_blocks=1 00:06:57.728 --rc geninfo_unexecuted_blocks=1 00:06:57.728 00:06:57.728 ' 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.728 --rc genhtml_branch_coverage=1 00:06:57.728 --rc genhtml_function_coverage=1 00:06:57.728 --rc genhtml_legend=1 00:06:57.728 --rc geninfo_all_blocks=1 00:06:57.728 --rc geninfo_unexecuted_blocks=1 00:06:57.728 00:06:57.728 ' 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.728 --rc genhtml_branch_coverage=1 00:06:57.728 --rc genhtml_function_coverage=1 00:06:57.728 --rc genhtml_legend=1 00:06:57.728 --rc geninfo_all_blocks=1 00:06:57.728 --rc geninfo_unexecuted_blocks=1 00:06:57.728 00:06:57.728 ' 00:06:57.728 14:14:39 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.728 --rc genhtml_branch_coverage=1 00:06:57.728 --rc genhtml_function_coverage=1 00:06:57.728 --rc genhtml_legend=1 00:06:57.728 --rc geninfo_all_blocks=1 00:06:57.728 --rc geninfo_unexecuted_blocks=1 00:06:57.728 00:06:57.728 ' 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:57.728 14:14:39 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:57.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72204 00:06:57.728 14:14:39 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:57.729 14:14:39 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72204 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72204 ']' 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.729 14:14:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.729 14:14:39 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:57.729 [2024-11-29 14:14:39.360928] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
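The trace above launches the SPDK target (spdk_tgt, pid 72204) and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock is accepting commands. A minimal sketch of that wait pattern in bash, assuming only the socket path (the real autotest_common.sh helper additionally checks that the pid is still alive and retries up to max_retries=100):

    # Poll for the SPDK RPC socket; a simplified stand-in for waitforlisten.
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            [ -S "$sock" ] && return 0   # socket node has appeared
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock
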
00:06:57.729 [2024-11-29 14:14:39.361204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72204 ] 00:06:57.729 [2024-11-29 14:14:39.504066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.988 [2024-11-29 14:14:39.576748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.556 14:14:40 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.556 14:14:40 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:58.556 14:14:40 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:58.556 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.556 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.815 14:14:40 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.815 14:14:40 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:58.815 14:14:40 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.815 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.816 14:14:40 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:58.816 14:14:40 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:58.816 14:14:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:59.076 14:14:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ad41383d-478a-4e84-8821-bb673974e410"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ad41383d-478a-4e84-8821-bb673974e410",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3b910d4f-a1ed-40e5-8ead-cf0f823a3c36"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3b910d4f-a1ed-40e5-8ead-cf0f823a3c36",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "56403029-fc99-48dc-91cc-8ce3a7f695c6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "56403029-fc99-48dc-91cc-8ce3a7f695c6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ee3c96ce-1f0e-42bb-b18c-d31991a1b1f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee3c96ce-1f0e-42bb-b18c-d31991a1b1f5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7fe7b1e9-473b-4a06-b738-672f84c7e4ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "7fe7b1e9-473b-4a06-b738-672f84c7e4ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4a73dd7a-2377-44e2-af9d-d2408630c498"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4a73dd7a-2377-44e2-af9d-d2408630c498",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:59.076 14:14:40 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:59.076 14:14:40 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:59.076 14:14:40 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:59.076 14:14:40 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72204 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72204 ']' 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72204 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:59.076 14:14:40 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72204 00:06:59.076 killing process with pid 72204 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72204' 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72204 00:06:59.076 14:14:40 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72204 00:06:59.336 14:14:41 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.336 14:14:41 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.336 14:14:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:59.336 14:14:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.336 14:14:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.596 ************************************ 00:06:59.596 START TEST bdev_hello_world 00:06:59.596 ************************************ 00:06:59.596 14:14:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.596 [2024-11-29 14:14:41.195204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:59.596 [2024-11-29 14:14:41.195320] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72277 ] 00:06:59.596 [2024-11-29 14:14:41.343325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.596 [2024-11-29 14:14:41.386214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.166 [2024-11-29 14:14:41.771838] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:00.166 [2024-11-29 14:14:41.771900] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:00.166 [2024-11-29 14:14:41.771923] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:00.166 [2024-11-29 14:14:41.774397] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:00.166 [2024-11-29 14:14:41.775318] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:00.166 [2024-11-29 14:14:41.775431] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:00.166 [2024-11-29 14:14:41.776007] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
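At this point hello_bdev has opened Nvme0n1, written its test string, and read it back; the lines that follow stop the app and print the timing summary. For reference, the same run can be reproduced by hand with the binary and config file named in the trace (root privileges and configured hugepages are assumed, as on the CI host):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b Nvme0n1
    # On success the final notices are "Read string from bdev : Hello World!"
    # and "Stopping app", matching the trace above.
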
00:07:00.166 00:07:00.166 [2024-11-29 14:14:41.776114] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:00.426 00:07:00.426 ************************************ 00:07:00.426 END TEST bdev_hello_world 00:07:00.426 ************************************ 00:07:00.426 real 0m0.842s 00:07:00.426 user 0m0.566s 00:07:00.426 sys 0m0.171s 00:07:00.426 14:14:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.426 14:14:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:00.426 14:14:42 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:00.426 14:14:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.426 14:14:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.426 14:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.426 ************************************ 00:07:00.426 START TEST bdev_bounds 00:07:00.426 ************************************ 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72308 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72308' 00:07:00.426 Process bdevio pid: 72308 00:07:00.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72308 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72308 ']' 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.426 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.426 [2024-11-29 14:14:42.082040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
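bdev_bounds drives the bdevio application rather than a standalone example: bdevio is started against the shared bdev.json with -w so it waits to be driven over RPC, and the suites are then triggered with tests.py perform_tests, which appears further down in the trace. A simplified by-hand version of that sequence, with the backgrounding and cleanup sketched here rather than copied from the harness:

    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # wait for /var/tmp/spdk.sock as in the earlier polling sketch, then:
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"
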
00:07:00.426 [2024-11-29 14:14:42.082182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72308 ] 00:07:00.777 [2024-11-29 14:14:42.225996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.777 [2024-11-29 14:14:42.271216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.777 [2024-11-29 14:14:42.271533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.777 [2024-11-29 14:14:42.271614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.356 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.356 14:14:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:01.356 14:14:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:01.356 I/O targets: 00:07:01.356 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:01.356 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:01.356 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.356 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.356 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.356 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:01.356 00:07:01.356 00:07:01.356 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.356 http://cunit.sourceforge.net/ 00:07:01.356 00:07:01.356 00:07:01.356 Suite: bdevio tests on: Nvme3n1 00:07:01.356 Test: blockdev write read block ...passed 00:07:01.356 Test: blockdev write zeroes read block ...passed 00:07:01.356 Test: blockdev write zeroes read no split ...passed 00:07:01.356 Test: blockdev write zeroes read split ...passed 00:07:01.356 Test: blockdev write zeroes read split partial ...passed 00:07:01.356 Test: blockdev reset ...[2024-11-29 14:14:43.039556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:01.356 [2024-11-29 14:14:43.041872] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.356 passed 00:07:01.356 Test: blockdev write read 8 blocks ...passed 00:07:01.356 Test: blockdev write read size > 128k ...passed 00:07:01.356 Test: blockdev write read invalid size ...passed 00:07:01.356 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.356 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.356 Test: blockdev write read max offset ...passed 00:07:01.356 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.356 Test: blockdev writev readv 8 blocks ...passed 00:07:01.356 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.356 Test: blockdev writev readv block ...passed 00:07:01.356 Test: blockdev writev readv size > 128k ...passed 00:07:01.356 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.356 Test: blockdev comparev and writev ...[2024-11-29 14:14:43.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc06000 len:0x1000 00:07:01.356 passed 00:07:01.356 Test: blockdev nvme passthru rw ...[2024-11-29 14:14:43.048325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.356 passed 00:07:01.356 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.356 Test: blockdev nvme admin passthru ...[2024-11-29 14:14:43.048851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.356 [2024-11-29 14:14:43.048893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.356 passed 00:07:01.356 Test: blockdev copy ...passed 00:07:01.356 Suite: bdevio tests on: Nvme2n3 00:07:01.356 Test: blockdev write read block ...passed 00:07:01.356 Test: blockdev write zeroes read block ...passed 00:07:01.356 Test: blockdev write zeroes read no split ...passed 00:07:01.356 Test: blockdev write zeroes read split ...passed 00:07:01.356 Test: blockdev write zeroes read split partial ...passed 00:07:01.356 Test: blockdev reset ...[2024-11-29 14:14:43.062377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.356 passed 00:07:01.356 Test: blockdev write read 8 blocks ...[2024-11-29 14:14:43.064419] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.356 passed 00:07:01.356 Test: blockdev write read size > 128k ...passed 00:07:01.356 Test: blockdev write read invalid size ...passed 00:07:01.356 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.356 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.356 Test: blockdev write read max offset ...passed 00:07:01.356 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.356 Test: blockdev writev readv 8 blocks ...passed 00:07:01.356 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.356 Test: blockdev writev readv block ...passed 00:07:01.356 Test: blockdev writev readv size > 128k ...passed 00:07:01.356 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.356 Test: blockdev comparev and writev ...[2024-11-29 14:14:43.067997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de805000 len:0x1000 00:07:01.356 [2024-11-29 14:14:43.068035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.356 passed 00:07:01.356 Test: blockdev nvme passthru rw ...passed 00:07:01.356 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.356 Test: blockdev nvme admin passthru ...[2024-11-29 14:14:43.068424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.356 [2024-11-29 14:14:43.068450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.356 passed 00:07:01.356 Test: blockdev copy ...passed 00:07:01.356 Suite: bdevio tests on: Nvme2n2 00:07:01.356 Test: blockdev write read block ...passed 00:07:01.356 Test: blockdev write zeroes read block ...passed 00:07:01.356 Test: blockdev write zeroes read no split ...passed 00:07:01.356 Test: blockdev write zeroes read split ...passed 00:07:01.356 Test: blockdev write zeroes read split partial ...passed 00:07:01.356 Test: blockdev reset ...[2024-11-29 14:14:43.082855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.356 [2024-11-29 14:14:43.084731] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.356 passed 00:07:01.356 Test: blockdev write read 8 blocks ...passed 00:07:01.356 Test: blockdev write read size > 128k ...passed 00:07:01.356 Test: blockdev write read invalid size ...passed 00:07:01.356 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.356 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.356 Test: blockdev write read max offset ...passed 00:07:01.356 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.356 Test: blockdev writev readv 8 blocks ...passed 00:07:01.356 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.357 Test: blockdev writev readv block ...passed 00:07:01.357 Test: blockdev writev readv size > 128k ...passed 00:07:01.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.357 Test: blockdev comparev and writev ...[2024-11-29 14:14:43.088505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dec36000 len:0x1000 00:07:01.357 [2024-11-29 14:14:43.088538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev nvme passthru rw ...passed 00:07:01.357 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.357 Test: blockdev nvme admin passthru ...[2024-11-29 14:14:43.089058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.357 [2024-11-29 14:14:43.089084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev copy ...passed 00:07:01.357 Suite: bdevio tests on: Nvme2n1 00:07:01.357 Test: blockdev write read block ...passed 00:07:01.357 Test: blockdev write zeroes read block ...passed 00:07:01.357 Test: blockdev write zeroes read no split ...passed 00:07:01.357 Test: blockdev write zeroes read split ...passed 00:07:01.357 Test: blockdev write zeroes read split partial ...passed 00:07:01.357 Test: blockdev reset ...[2024-11-29 14:14:43.103199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:01.357 passed 00:07:01.357 Test: blockdev write read 8 blocks ...[2024-11-29 14:14:43.104842] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.357 passed 00:07:01.357 Test: blockdev write read size > 128k ...passed 00:07:01.357 Test: blockdev write read invalid size ...passed 00:07:01.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.357 Test: blockdev write read max offset ...passed 00:07:01.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.357 Test: blockdev writev readv 8 blocks ...passed 00:07:01.357 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.357 Test: blockdev writev readv block ...passed 00:07:01.357 Test: blockdev writev readv size > 128k ...passed 00:07:01.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.357 Test: blockdev comparev and writev ...[2024-11-29 14:14:43.108618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dec30000 len:0x1000 00:07:01.357 [2024-11-29 14:14:43.108653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev nvme passthru rw ...passed 00:07:01.357 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.357 Test: blockdev nvme admin passthru ...[2024-11-29 14:14:43.109163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.357 [2024-11-29 14:14:43.109189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev copy ...passed 00:07:01.357 Suite: bdevio tests on: Nvme1n1 00:07:01.357 Test: blockdev write read block ...passed 00:07:01.357 Test: blockdev write zeroes read block ...passed 00:07:01.357 Test: blockdev write zeroes read no split ...passed 00:07:01.357 Test: blockdev write zeroes read split ...passed 00:07:01.357 Test: blockdev write zeroes read split partial ...passed 00:07:01.357 Test: blockdev reset ...[2024-11-29 14:14:43.122860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:01.357 [2024-11-29 14:14:43.124425] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.357 passed 00:07:01.357 Test: blockdev write read 8 blocks ...passed 00:07:01.357 Test: blockdev write read size > 128k ...passed 00:07:01.357 Test: blockdev write read invalid size ...passed 00:07:01.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.357 Test: blockdev write read max offset ...passed 00:07:01.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.357 Test: blockdev writev readv 8 blocks ...passed 00:07:01.357 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.357 Test: blockdev writev readv block ...passed 00:07:01.357 Test: blockdev writev readv size > 128k ...passed 00:07:01.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.357 Test: blockdev comparev and writev ...[2024-11-29 14:14:43.128041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dec2c000 len:0x1000 00:07:01.357 [2024-11-29 14:14:43.128074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev nvme passthru rw ...passed 00:07:01.357 Test: blockdev nvme passthru vendor specific ...passed 00:07:01.357 Test: blockdev nvme admin passthru ...[2024-11-29 14:14:43.128500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.357 [2024-11-29 14:14:43.128527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.357 passed 00:07:01.357 Test: blockdev copy ...passed 00:07:01.357 Suite: bdevio tests on: Nvme0n1 00:07:01.357 Test: blockdev write read block ...passed 00:07:01.357 Test: blockdev write zeroes read block ...passed 00:07:01.357 Test: blockdev write zeroes read no split ...passed 00:07:01.357 Test: blockdev write zeroes read split ...passed 00:07:01.357 Test: blockdev write zeroes read split partial ...passed 00:07:01.357 Test: blockdev reset ...[2024-11-29 14:14:43.142614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:01.357 [2024-11-29 14:14:43.144072] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:01.357 passed 00:07:01.357 Test: blockdev write read 8 blocks ...passed 00:07:01.357 Test: blockdev write read size > 128k ...passed 00:07:01.357 Test: blockdev write read invalid size ...passed 00:07:01.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.357 Test: blockdev write read max offset ...passed 00:07:01.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.357 Test: blockdev writev readv 8 blocks ...passed 00:07:01.357 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.357 Test: blockdev writev readv block ...passed 00:07:01.357 Test: blockdev writev readv size > 128k ...passed 00:07:01.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.357 Test: blockdev comparev and writev ...passed 00:07:01.357 Test: blockdev nvme passthru rw ...[2024-11-29 14:14:43.147075] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.357 separate metadata which is not supported yet. 00:07:01.357 passed 00:07:01.357 Test: blockdev nvme passthru vendor specific ...[2024-11-29 14:14:43.147367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:07:01.357 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:01.357 [2024-11-29 14:14:43.147511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.619 passed 00:07:01.619 Test: blockdev copy ...passed 00:07:01.619 00:07:01.619 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.619 suites 6 6 n/a 0 0 00:07:01.619 tests 138 138 138 0 0 00:07:01.619 asserts 893 893 893 0 n/a 00:07:01.619 00:07:01.619 Elapsed time = 0.306 seconds 00:07:01.619 0 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72308 ']' 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72308' 00:07:01.619 killing process with pid 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72308 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.619 00:07:01.619 real 0m1.317s 00:07:01.619 user 0m3.300s 00:07:01.619 sys 0m0.269s 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.619 14:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.619 ************************************ 00:07:01.619 END 
TEST bdev_bounds 00:07:01.619 ************************************ 00:07:01.619 14:14:43 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.619 14:14:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:01.619 14:14:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.619 14:14:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.619 ************************************ 00:07:01.619 START TEST bdev_nbd 00:07:01.619 ************************************ 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72351 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72351 /var/tmp/spdk-nbd.sock 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72351 ']' 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
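Once bdev_svc is listening on /var/tmp/spdk-nbd.sock, the test exports each of the six bdevs as a kernel NBD device through that socket, one nbd_start_disk RPC per bdev, as the trace below shows starting with Nvme0n1. Condensed into a plain loop (the trace goes through the nbd_common.sh helpers instead and leaves the /dev/nbdX choice to the target):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev"
    done
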
00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.619 14:14:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 [2024-11-29 14:14:43.437220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.879 [2024-11-29 14:14:43.437526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.879 [2024-11-29 14:14:43.581539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.879 [2024-11-29 14:14:43.613823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.821 
14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.821 1+0 records in 00:07:02.821 1+0 records out 00:07:02.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353318 s, 11.6 MB/s 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.821 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.080 1+0 records in 00:07:03.080 1+0 records out 00:07:03.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033066 s, 12.4 MB/s 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.080 
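The waitfornbd check repeated above for each device boils down to two conditions: the device name must appear in /proc/partitions, and a single direct-I/O read of one block must land a 4096-byte file. A condensed one-shot sketch of that check (the real helper retries both steps up to 20 times, per the (( i <= 20 )) loops in the trace, and writes into the repo's test/bdev/nbdtest file rather than the illustrative /tmp path used here):

    waitfornbd_once() {
        local name=$1 out=/tmp/nbdtest    # /tmp/nbdtest is an illustrative path
        grep -q -w "$name" /proc/partitions || return 1
        dd if="/dev/$name" of="$out" bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s "$out")" -eq 4096 ]
    }
    waitfornbd_once nbd0
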
14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.080 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.340 1+0 records in 00:07:03.340 1+0 records out 00:07:03.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307029 s, 13.3 MB/s 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.340 14:14:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.599 1+0 records in 00:07:03.599 1+0 records out 00:07:03.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287065 s, 14.3 MB/s 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.599 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.859 1+0 records in 00:07:03.859 1+0 records out 00:07:03.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523636 s, 7.8 MB/s 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.859 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.120 1+0 records in 00:07:04.120 1+0 records out 00:07:04.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297144 s, 13.8 MB/s 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd0", 00:07:04.120 "bdev_name": "Nvme0n1" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd1", 00:07:04.120 "bdev_name": "Nvme1n1" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd2", 00:07:04.120 "bdev_name": "Nvme2n1" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd3", 00:07:04.120 "bdev_name": "Nvme2n2" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd4", 00:07:04.120 "bdev_name": "Nvme2n3" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd5", 00:07:04.120 "bdev_name": "Nvme3n1" 00:07:04.120 } 00:07:04.120 ]' 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd0", 00:07:04.120 "bdev_name": "Nvme0n1" 00:07:04.120 
}, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd1", 00:07:04.120 "bdev_name": "Nvme1n1" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd2", 00:07:04.120 "bdev_name": "Nvme2n1" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd3", 00:07:04.120 "bdev_name": "Nvme2n2" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd4", 00:07:04.120 "bdev_name": "Nvme2n3" 00:07:04.120 }, 00:07:04.120 { 00:07:04.120 "nbd_device": "/dev/nbd5", 00:07:04.120 "bdev_name": "Nvme3n1" 00:07:04.120 } 00:07:04.120 ]' 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.120 14:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.381 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.642 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 
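The start/stop verification traced above leans on two small polling helpers: after nbd_start_disk, waitfornbd loops until the new node shows up in /proc/partitions and then proves it is readable with a single 4 KiB O_DIRECT read; after nbd_stop_disk, waitfornbd_exit loops until the entry disappears again. A minimal sketch of the same pattern (simplified from the SPDK test helpers; the device name and scratch file are parameters here, and the retry/sleep cadence is illustrative):

  # Poll until an NBD device is registered, then confirm it serves I/O.
  wait_for_nbd() {
      local nbd_name=$1 tmpfile=${2:-/tmp/nbdtest} i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # One direct 4 KiB read proves the block device actually answers reads.
      dd if="/dev/$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmpfile")
      rm -f "$tmpfile"
      [[ $size -ne 0 ]]
  }

  # Poll until a stopped NBD device disappears from /proc/partitions.
  wait_for_nbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || return 0
          sleep 0.1
      done
      return 1
  }
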
00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.903 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.165 14:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.425 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.685 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.686 14:14:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:05.947 /dev/nbd0 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.947 1+0 records in 00:07:05.947 1+0 records out 00:07:05.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753844 s, 5.4 MB/s 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.947 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:06.207 /dev/nbd1 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.207 1+0 records in 00:07:06.207 
1+0 records out 00:07:06.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460042 s, 8.9 MB/s 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.207 14:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:06.468 /dev/nbd10 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.468 1+0 records in 00:07:06.468 1+0 records out 00:07:06.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499097 s, 8.2 MB/s 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.468 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:06.729 /dev/nbd11 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 
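In this second pass (nbd_rpc_data_verify) each bdev is pinned to an explicit node instead of letting SPDK pick the next free one: rpc.py is pointed at the dedicated /var/tmp/spdk-nbd.sock socket and invoked as nbd_start_disk <bdev> <device>. A condensed sketch of the mapping loop, using the same names and paths as this run (adjust the rpc.py location for your checkout):

  # Pin each NVMe bdev to a fixed /dev/nbdX node over the spdk-nbd RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  for i in "${!bdevs[@]}"; do
      "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
  done

  # Cross-check what the target actually exported.
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'
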
00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.729 1+0 records in 00:07:06.729 1+0 records out 00:07:06.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406211 s, 10.1 MB/s 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.729 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:06.989 /dev/nbd12 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.989 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.990 1+0 records in 00:07:06.990 1+0 records out 00:07:06.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306273 s, 13.4 MB/s 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:06.990 /dev/nbd13 00:07:06.990 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.249 1+0 records in 00:07:07.249 1+0 records out 00:07:07.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049335 s, 8.3 MB/s 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.249 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.250 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.250 14:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd0", 00:07:07.250 "bdev_name": "Nvme0n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd1", 00:07:07.250 "bdev_name": "Nvme1n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd10", 00:07:07.250 "bdev_name": "Nvme2n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd11", 00:07:07.250 "bdev_name": "Nvme2n2" 00:07:07.250 }, 
00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd12", 00:07:07.250 "bdev_name": "Nvme2n3" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd13", 00:07:07.250 "bdev_name": "Nvme3n1" 00:07:07.250 } 00:07:07.250 ]' 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd0", 00:07:07.250 "bdev_name": "Nvme0n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd1", 00:07:07.250 "bdev_name": "Nvme1n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd10", 00:07:07.250 "bdev_name": "Nvme2n1" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd11", 00:07:07.250 "bdev_name": "Nvme2n2" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd12", 00:07:07.250 "bdev_name": "Nvme2n3" 00:07:07.250 }, 00:07:07.250 { 00:07:07.250 "nbd_device": "/dev/nbd13", 00:07:07.250 "bdev_name": "Nvme3n1" 00:07:07.250 } 00:07:07.250 ]' 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.250 /dev/nbd1 00:07:07.250 /dev/nbd10 00:07:07.250 /dev/nbd11 00:07:07.250 /dev/nbd12 00:07:07.250 /dev/nbd13' 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.250 /dev/nbd1 00:07:07.250 /dev/nbd10 00:07:07.250 /dev/nbd11 00:07:07.250 /dev/nbd12 00:07:07.250 /dev/nbd13' 00:07:07.250 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:07.510 256+0 records in 00:07:07.510 256+0 records out 00:07:07.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125176 s, 83.8 MB/s 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.510 256+0 records in 00:07:07.510 256+0 records out 00:07:07.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0586778 s, 17.9 MB/s 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.510 14:14:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.510 256+0 records in 00:07:07.510 256+0 records out 00:07:07.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.061781 s, 17.0 MB/s 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:07.510 256+0 records in 00:07:07.510 256+0 records out 00:07:07.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.061493 s, 17.1 MB/s 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.510 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:07.770 256+0 records in 00:07:07.770 256+0 records out 00:07:07.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0598627 s, 17.5 MB/s 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:07.770 256+0 records in 00:07:07.770 256+0 records out 00:07:07.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0610558 s, 17.2 MB/s 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:07.770 256+0 records in 00:07:07.770 256+0 records out 00:07:07.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600247 s, 17.5 MB/s 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.770 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.031 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.292 14:14:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.292 14:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.553 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.813 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.073 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:09.334 14:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:09.594 malloc_lvol_verify 00:07:09.594 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:09.594 556de869-8b8f-4c49-8227-52980219e6c5 00:07:09.594 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:09.853 ccf86168-1f57-46ce-b470-3541c3133858 00:07:09.853 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:10.114 /dev/nbd0 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
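The nbd_with_lvol_verify step traced above builds a tiny logical-volume stack and then formats it through NBD (the mkfs.ext4 output follows below): a 16 MB malloc bdev with 512-byte blocks, an lvstore on top of it, a 4 MB lvol, an export of lvs/lvol as /dev/nbd0, a capacity check via /sys/block/nbd0/size, and finally mkfs.ext4. The same sequence, condensed (RPC commands copied from this run; rpc.py path and socket as above, and the capacity check is a simplified one-shot version of the wait loop):

  # Lvol-over-NBD verification, condensed from the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB bdev, 512 B blocks
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol named "lvol"
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

  # The export is usable once the kernel reports a non-zero capacity.
  [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -gt 0 ]]

  mkfs.ext4 /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
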
00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:10.114 mke2fs 1.47.0 (5-Feb-2023) 00:07:10.114 Discarding device blocks: 0/4096 done 00:07:10.114 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:10.114 00:07:10.114 Allocating group tables: 0/1 done 00:07:10.114 Writing inode tables: 0/1 done 00:07:10.114 Creating journal (1024 blocks): done 00:07:10.114 Writing superblocks and filesystem accounting information: 0/1 done 00:07:10.114 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.114 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72351 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72351 ']' 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72351 00:07:10.375 14:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72351 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.375 killing process with pid 72351 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72351' 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72351 00:07:10.375 14:14:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72351 00:07:13.680 14:14:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:13.680 00:07:13.680 real 0m11.804s 00:07:13.680 user 0m15.364s 00:07:13.680 sys 0m3.531s 00:07:13.680 14:14:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.680 14:14:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 
-- # set +x 00:07:13.680 ************************************ 00:07:13.680 END TEST bdev_nbd 00:07:13.680 ************************************ 00:07:13.680 14:14:55 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:13.680 14:14:55 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:13.680 skipping fio tests on NVMe due to multi-ns failures. 00:07:13.680 14:14:55 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:13.680 14:14:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:13.680 14:14:55 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.680 14:14:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:13.680 14:14:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.680 14:14:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.680 ************************************ 00:07:13.680 START TEST bdev_verify 00:07:13.680 ************************************ 00:07:13.680 14:14:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.680 [2024-11-29 14:14:55.291532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.680 [2024-11-29 14:14:55.291651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72731 ] 00:07:13.680 [2024-11-29 14:14:55.439053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.680 [2024-11-29 14:14:55.471889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.680 [2024-11-29 14:14:55.471940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.252 Running I/O for 5 seconds... 
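The bdev_nbd test passes (0m11.8s real time) and, with fio skipped for multi-namespace NVMe, the suite moves straight to bdev_verify: bdevperf drives a 5-second verify workload, queue depth 128 with 4 KiB I/O on cores 0 and 1, against the bdevs described in test/bdev/bdev.json; the latency table below reports per-bdev, per-core results. Roughly the invocation used here (paths from this workspace; the -C behavior of letting every core drive every bdev matches the paired Core Mask 0x1/0x2 jobs in the table):

  # bdevperf verify pass, as launched by run_test above.
  #   -q 128     queue depth per job
  #   -o 4096    I/O size in bytes (4 KiB)
  #   -w verify  write, read back, and compare each block
  #   -t 5       run time in seconds
  #   -C         every core submits I/O to every bdev
  #   -m 0x3     core mask: cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
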
00:07:16.576 23744.00 IOPS, 92.75 MiB/s [2024-11-29T14:14:59.309Z] 25600.00 IOPS, 100.00 MiB/s [2024-11-29T14:15:00.250Z] 25834.67 IOPS, 100.92 MiB/s [2024-11-29T14:15:01.194Z] 26176.00 IOPS, 102.25 MiB/s [2024-11-29T14:15:01.194Z] 25830.40 IOPS, 100.90 MiB/s 00:07:19.400 Latency(us) 00:07:19.400 [2024-11-29T14:15:01.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.400 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0x0 length 0xbd0bd 00:07:19.400 Nvme0n1 : 5.06 2163.67 8.45 0.00 0.00 58891.16 6503.19 86305.87 00:07:19.400 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:19.400 Nvme0n1 : 5.07 2094.55 8.18 0.00 0.00 60975.23 10637.00 79853.10 00:07:19.400 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0x0 length 0xa0000 00:07:19.400 Nvme1n1 : 5.07 2171.23 8.48 0.00 0.00 58739.84 9931.22 81466.29 00:07:19.400 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0xa0000 length 0xa0000 00:07:19.400 Nvme1n1 : 5.07 2093.73 8.18 0.00 0.00 60831.98 11594.83 69367.34 00:07:19.400 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0x0 length 0x80000 00:07:19.400 Nvme2n1 : 5.07 2169.98 8.48 0.00 0.00 58659.99 11998.13 77433.30 00:07:19.400 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0x80000 length 0x80000 00:07:19.400 Nvme2n1 : 5.08 2092.00 8.17 0.00 0.00 60717.91 14014.62 58074.98 00:07:19.400 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.400 Verification LBA range: start 0x0 length 0x80000 00:07:19.401 Nvme2n2 : 5.08 2168.88 8.47 0.00 0.00 58568.77 12905.55 76223.41 00:07:19.401 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.401 Verification LBA range: start 0x80000 length 0x80000 00:07:19.401 Nvme2n2 : 5.08 2090.81 8.17 0.00 0.00 60585.02 14417.92 55655.19 00:07:19.401 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.401 Verification LBA range: start 0x0 length 0x80000 00:07:19.401 Nvme2n3 : 5.08 2168.17 8.47 0.00 0.00 58477.62 12098.95 80256.39 00:07:19.401 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.401 Verification LBA range: start 0x80000 length 0x80000 00:07:19.401 Nvme2n3 : 5.08 2090.26 8.17 0.00 0.00 60461.97 13812.97 60494.77 00:07:19.401 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:19.401 Verification LBA range: start 0x0 length 0x20000 00:07:19.401 Nvme3n1 : 5.08 2167.70 8.47 0.00 0.00 58379.88 7561.85 84692.68 00:07:19.401 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:19.401 Verification LBA range: start 0x20000 length 0x20000 00:07:19.401 Nvme3n1 : 5.08 2089.69 8.16 0.00 0.00 60400.56 12905.55 67350.84 00:07:19.401 [2024-11-29T14:15:01.195Z] =================================================================================================================== 00:07:19.401 [2024-11-29T14:15:01.195Z] Total : 25560.68 99.85 0.00 0.00 59623.06 6503.19 86305.87 00:07:19.971 00:07:19.971 real 0m6.294s 00:07:19.971 user 0m11.680s 00:07:19.971 sys 0m0.205s 00:07:19.971 14:15:01 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.971 14:15:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.971 ************************************ 00:07:19.971 END TEST bdev_verify 00:07:19.971 ************************************ 00:07:19.971 14:15:01 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.971 14:15:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:19.971 14:15:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.971 14:15:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.971 ************************************ 00:07:19.971 START TEST bdev_verify_big_io 00:07:19.971 ************************************ 00:07:19.971 14:15:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.971 [2024-11-29 14:15:01.630308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:19.971 [2024-11-29 14:15:01.630420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72818 ] 00:07:20.231 [2024-11-29 14:15:01.777073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.231 [2024-11-29 14:15:01.809876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.231 [2024-11-29 14:15:01.809954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.529 Running I/O for 5 seconds... 
00:07:25.231 159.00 IOPS, 9.94 MiB/s [2024-11-29T14:15:07.959Z] 1879.50 IOPS, 117.47 MiB/s [2024-11-29T14:15:08.525Z] 1836.67 IOPS, 114.79 MiB/s 00:07:26.731 Latency(us) 00:07:26.731 [2024-11-29T14:15:08.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.731 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0xbd0b 00:07:26.731 Nvme0n1 : 5.74 132.86 8.30 0.00 0.00 931436.56 13812.97 1180857.90 00:07:26.731 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:26.731 Nvme0n1 : 5.74 111.50 6.97 0.00 0.00 1100757.11 12855.14 1277649.53 00:07:26.731 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0xa000 00:07:26.731 Nvme1n1 : 5.75 130.00 8.12 0.00 0.00 907717.67 65737.65 980821.86 00:07:26.731 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0xa000 length 0xa000 00:07:26.731 Nvme1n1 : 5.79 107.83 6.74 0.00 0.00 1102335.56 44564.48 1561571.64 00:07:26.731 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0x8000 00:07:26.731 Nvme2n1 : 5.75 133.53 8.35 0.00 0.00 862618.65 137121.48 884030.23 00:07:26.731 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x8000 length 0x8000 00:07:26.731 Nvme2n1 : 5.93 113.07 7.07 0.00 0.00 1008128.17 68560.74 1729343.80 00:07:26.731 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0x8000 00:07:26.731 Nvme2n2 : 5.92 140.52 8.78 0.00 0.00 794645.33 64527.75 871124.68 00:07:26.731 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x8000 length 0x8000 00:07:26.731 Nvme2n2 : 5.93 116.08 7.25 0.00 0.00 944830.11 68157.44 1593835.52 00:07:26.731 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0x8000 00:07:26.731 Nvme2n3 : 6.00 149.35 9.33 0.00 0.00 726176.69 51622.20 955010.76 00:07:26.731 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x8000 length 0x8000 00:07:26.731 Nvme2n3 : 6.02 134.83 8.43 0.00 0.00 785588.71 20366.57 1413157.81 00:07:26.731 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x0 length 0x2000 00:07:26.731 Nvme3n1 : 6.06 165.36 10.33 0.00 0.00 635651.77 82.71 1051802.39 00:07:26.731 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.731 Verification LBA range: start 0x2000 length 0x2000 00:07:26.731 Nvme3n1 : 6.14 196.08 12.26 0.00 0.00 524903.24 283.57 1832588.21 00:07:26.731 [2024-11-29T14:15:08.525Z] =================================================================================================================== 00:07:26.731 [2024-11-29T14:15:08.525Z] Total : 1631.01 101.94 0.00 0.00 828081.80 82.71 1832588.21 00:07:27.667 00:07:27.667 real 0m7.594s 00:07:27.667 user 0m14.472s 00:07:27.667 sys 0m0.220s 00:07:27.667 14:15:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.667 14:15:09 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:27.667 ************************************ 00:07:27.667 END TEST bdev_verify_big_io 00:07:27.667 ************************************ 00:07:27.667 14:15:09 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.667 14:15:09 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:27.667 14:15:09 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.667 14:15:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.667 ************************************ 00:07:27.667 START TEST bdev_write_zeroes 00:07:27.667 ************************************ 00:07:27.667 14:15:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.667 [2024-11-29 14:15:09.261374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:27.667 [2024-11-29 14:15:09.261464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72918 ] 00:07:27.667 [2024-11-29 14:15:09.405627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.667 [2024-11-29 14:15:09.439329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.240 Running I/O for 1 seconds... 
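The write_zeroes pass above reuses the same bdevperf binary and JSON config; only the workload and runtime change. A minimal sketch of the variant, with SPDK set as in the earlier sketch and the flags copied from the command echoed above:

# -w write_zeroes exercises the bdev write_zeroes path instead of verify;
# -t 1 limits the pass to one second on a single reactor (core 0).
"$SPDK"/build/examples/bdevperf --json "$SPDK"/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1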
00:07:29.177 74112.00 IOPS, 289.50 MiB/s 00:07:29.177 Latency(us) 00:07:29.177 [2024-11-29T14:15:10.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.177 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme0n1 : 1.02 12290.80 48.01 0.00 0.00 10394.07 8570.09 20064.10 00:07:29.177 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme1n1 : 1.02 12276.70 47.96 0.00 0.00 10390.79 8620.50 19459.15 00:07:29.177 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme2n1 : 1.02 12262.73 47.90 0.00 0.00 10382.05 8519.68 18854.20 00:07:29.177 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme2n2 : 1.02 12248.66 47.85 0.00 0.00 10366.41 8116.38 18450.90 00:07:29.177 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme2n3 : 1.03 12234.73 47.79 0.00 0.00 10353.67 6856.07 18652.55 00:07:29.177 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.177 Nvme3n1 : 1.03 12220.90 47.74 0.00 0.00 10346.28 6225.92 20064.10 00:07:29.177 [2024-11-29T14:15:10.971Z] =================================================================================================================== 00:07:29.177 [2024-11-29T14:15:10.971Z] Total : 73534.52 287.24 0.00 0.00 10372.21 6225.92 20064.10 00:07:29.436 00:07:29.436 real 0m1.824s 00:07:29.436 user 0m1.544s 00:07:29.436 sys 0m0.170s 00:07:29.436 14:15:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.436 14:15:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:29.436 ************************************ 00:07:29.436 END TEST bdev_write_zeroes 00:07:29.436 ************************************ 00:07:29.436 14:15:11 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.436 14:15:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.436 14:15:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.436 14:15:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.436 ************************************ 00:07:29.436 START TEST bdev_json_nonenclosed 00:07:29.436 ************************************ 00:07:29.436 14:15:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.436 [2024-11-29 14:15:11.128676] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:29.436 [2024-11-29 14:15:11.128794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72959 ] 00:07:29.695 [2024-11-29 14:15:11.280509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.695 [2024-11-29 14:15:11.314003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.695 [2024-11-29 14:15:11.314089] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:29.695 [2024-11-29 14:15:11.314107] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.695 [2024-11-29 14:15:11.314117] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.695 00:07:29.695 real 0m0.322s 00:07:29.695 user 0m0.125s 00:07:29.695 sys 0m0.093s 00:07:29.695 14:15:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.695 14:15:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:29.695 ************************************ 00:07:29.695 END TEST bdev_json_nonenclosed 00:07:29.695 ************************************ 00:07:29.695 14:15:11 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.695 14:15:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.695 14:15:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.695 14:15:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.695 ************************************ 00:07:29.695 START TEST bdev_json_nonarray 00:07:29.695 ************************************ 00:07:29.695 14:15:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.954 [2024-11-29 14:15:11.490837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:29.954 [2024-11-29 14:15:11.490947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72981 ] 00:07:29.954 [2024-11-29 14:15:11.631262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.954 [2024-11-29 14:15:11.664252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.954 [2024-11-29 14:15:11.664342] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
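Both negative fixtures above are rejected at startup: nonenclosed.json is not wrapped in {} and nonarray.json does not make 'subsystems' an array. For contrast, a minimal valid config has the shape below; this is a trimmed, illustrative sketch based on the bdev subsystem config echoed later in this log, not the full test file:

cat > /tmp/bdev-valid.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
      ]
    }
  ]
}
EOF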
00:07:29.954 [2024-11-29 14:15:11.664360] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.954 [2024-11-29 14:15:11.664374] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.954 00:07:29.954 real 0m0.310s 00:07:29.954 user 0m0.121s 00:07:29.954 sys 0m0.087s 00:07:29.954 14:15:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.954 14:15:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:29.954 ************************************ 00:07:29.954 END TEST bdev_json_nonarray 00:07:29.954 ************************************ 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:30.213 14:15:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:30.213 00:07:30.213 real 0m32.659s 00:07:30.213 user 0m49.088s 00:07:30.213 sys 0m5.634s 00:07:30.213 14:15:11 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.213 14:15:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.213 ************************************ 00:07:30.213 END TEST blockdev_nvme 00:07:30.213 ************************************ 00:07:30.213 14:15:11 -- spdk/autotest.sh@209 -- # uname -s 00:07:30.213 14:15:11 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:30.213 14:15:11 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.213 14:15:11 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.213 14:15:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.213 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:07:30.213 ************************************ 00:07:30.213 START TEST blockdev_nvme_gpt 00:07:30.213 ************************************ 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.213 * Looking for test storage... 
00:07:30.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.213 14:15:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.213 --rc genhtml_branch_coverage=1 00:07:30.213 --rc genhtml_function_coverage=1 00:07:30.213 --rc genhtml_legend=1 00:07:30.213 --rc geninfo_all_blocks=1 00:07:30.213 --rc geninfo_unexecuted_blocks=1 00:07:30.213 00:07:30.213 ' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.213 --rc 
genhtml_branch_coverage=1 00:07:30.213 --rc genhtml_function_coverage=1 00:07:30.213 --rc genhtml_legend=1 00:07:30.213 --rc geninfo_all_blocks=1 00:07:30.213 --rc geninfo_unexecuted_blocks=1 00:07:30.213 00:07:30.213 ' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.213 --rc genhtml_branch_coverage=1 00:07:30.213 --rc genhtml_function_coverage=1 00:07:30.213 --rc genhtml_legend=1 00:07:30.213 --rc geninfo_all_blocks=1 00:07:30.213 --rc geninfo_unexecuted_blocks=1 00:07:30.213 00:07:30.213 ' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.213 --rc genhtml_branch_coverage=1 00:07:30.213 --rc genhtml_function_coverage=1 00:07:30.213 --rc genhtml_legend=1 00:07:30.213 --rc geninfo_all_blocks=1 00:07:30.213 --rc geninfo_unexecuted_blocks=1 00:07:30.213 00:07:30.213 ' 00:07:30.213 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:30.213 14:15:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.213 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73054 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73054 
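start_spdk_tgt and waitforlisten above amount to launching the target and polling its RPC socket until it responds. Roughly, assuming the default /var/tmp/spdk.sock socket (the real helpers live in test/common/autotest_common.sh and do more bookkeeping):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/bin/spdk_tgt &
tgt_pid=$!
# Poll the RPC socket until the target answers a trivial RPC.
until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "spdk_tgt ($tgt_pid) is listening"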
00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73054 ']' 00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.214 14:15:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.214 14:15:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.472 [2024-11-29 14:15:12.039290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:30.472 [2024-11-29 14:15:12.039407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73054 ] 00:07:30.472 [2024-11-29 14:15:12.185537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.472 [2024-11-29 14:15:12.218846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.403 14:15:12 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.403 14:15:12 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:31.403 14:15:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:31.403 14:15:12 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:31.403 14:15:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.660 Waiting for block devices as requested 00:07:31.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.660 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.660 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.956 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:36.956 14:15:18 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:36.956 BYT; 00:07:36.956 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:36.956 BYT; 00:07:36.956 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:36.956 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.957 14:15:18 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:36.957 14:15:18 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:37.889 The operation has completed successfully. 00:07:37.889 14:15:19 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:38.822 The operation has completed successfully. 00:07:38.822 14:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.958 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.958 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.958 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.958 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:39.958 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.958 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.958 [] 00:07:39.958 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.958 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:39.959 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.959 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.220 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.220 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:40.220 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.220 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.221 14:15:21 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:40.221 14:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 14:15:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.221 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:40.221 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:40.221 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "705c2eed-7b63-42d4-81e0-033e5bca2198"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "705c2eed-7b63-42d4-81e0-033e5bca2198",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6a3326c2-830a-4ab5-bab4-ca333c801c4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a3326c2-830a-4ab5-bab4-ca333c801c4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ef9114ca-ecff-4301-9f79-5f0c8a83d686"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ef9114ca-ecff-4301-9f79-5f0c8a83d686",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "78a1f566-3392-4b2f-8474-bc8f050c98eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78a1f566-3392-4b2f-8474-bc8f050c98eb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0f45073d-d143-49dd-a986-f4b3e6a7928e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0f45073d-d143-49dd-a986-f4b3e6a7928e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:40.483 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.483 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:40.483 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:40.483 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73054 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73054 ']' 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73054 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73054 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.483 killing process with pid 73054 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.483 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73054' 00:07:40.484 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73054 00:07:40.484 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73054 00:07:40.745 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.745 14:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.745 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:40.745 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.745 14:15:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.745 ************************************ 00:07:40.745 START TEST bdev_hello_world 00:07:40.745 ************************************ 00:07:40.745 14:15:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.745 
[2024-11-29 14:15:22.469132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.745 [2024-11-29 14:15:22.469256] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73670 ] 00:07:41.006 [2024-11-29 14:15:22.615986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.006 [2024-11-29 14:15:22.658938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.267 [2024-11-29 14:15:23.048320] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.267 [2024-11-29 14:15:23.048373] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.267 [2024-11-29 14:15:23.048392] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.267 [2024-11-29 14:15:23.050530] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.267 [2024-11-29 14:15:23.051096] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.267 [2024-11-29 14:15:23.051126] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.267 [2024-11-29 14:15:23.051333] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:41.267 00:07:41.267 [2024-11-29 14:15:23.051364] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.528 00:07:41.528 real 0m0.844s 00:07:41.528 user 0m0.552s 00:07:41.528 sys 0m0.188s 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.528 ************************************ 00:07:41.528 END TEST bdev_hello_world 00:07:41.528 ************************************ 00:07:41.528 14:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.528 14:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.528 14:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.528 14:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.528 ************************************ 00:07:41.528 START TEST bdev_bounds 00:07:41.528 ************************************ 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73701 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.528 Process bdevio pid: 73701 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73701' 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73701 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73701 ']' 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio 
-w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.528 14:15:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.789 [2024-11-29 14:15:23.351260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:41.789 [2024-11-29 14:15:23.351383] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73701 ] 00:07:41.789 [2024-11-29 14:15:23.500528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.789 [2024-11-29 14:15:23.545175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.789 [2024-11-29 14:15:23.545481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.789 [2024-11-29 14:15:23.545526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.733 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.733 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:42.733 14:15:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.733 I/O targets: 00:07:42.733 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.733 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:42.733 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:42.733 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.733 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.733 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.733 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.733 00:07:42.733 00:07:42.733 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.733 http://cunit.sourceforge.net/ 00:07:42.733 00:07:42.733 00:07:42.733 Suite: bdevio tests on: Nvme3n1 00:07:42.733 Test: blockdev write read block ...passed 00:07:42.733 Test: blockdev write zeroes read block ...passed 00:07:42.733 Test: blockdev write zeroes read no split ...passed 00:07:42.733 Test: blockdev write zeroes read split ...passed 00:07:42.733 Test: blockdev write zeroes read split partial ...passed 00:07:42.733 Test: blockdev reset ...[2024-11-29 14:15:24.286118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:42.733 [2024-11-29 14:15:24.288020] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.733 passed 00:07:42.733 Test: blockdev write read 8 blocks ...passed 00:07:42.733 Test: blockdev write read size > 128k ...passed 00:07:42.733 Test: blockdev write read invalid size ...passed 00:07:42.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.733 Test: blockdev write read max offset ...passed 00:07:42.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.733 Test: blockdev writev readv 8 blocks ...passed 00:07:42.733 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.733 Test: blockdev writev readv block ...passed 00:07:42.733 Test: blockdev writev readv size > 128k ...passed 00:07:42.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.733 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.292942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5c0e000 len:0x1000 00:07:42.733 [2024-11-29 14:15:24.292992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.733 passed 00:07:42.733 Test: blockdev nvme passthru rw ...passed 00:07:42.733 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.733 Test: blockdev nvme admin passthru ...[2024-11-29 14:15:24.293468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.733 [2024-11-29 14:15:24.293508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.733 passed 00:07:42.733 Test: blockdev copy ...passed 00:07:42.733 Suite: bdevio tests on: Nvme2n3 00:07:42.733 Test: blockdev write read block ...passed 00:07:42.733 Test: blockdev write zeroes read block ...passed 00:07:42.733 Test: blockdev write zeroes read no split ...passed 00:07:42.733 Test: blockdev write zeroes read split ...passed 00:07:42.733 Test: blockdev write zeroes read split partial ...passed 00:07:42.733 Test: blockdev reset ...[2024-11-29 14:15:24.308468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.733 [2024-11-29 14:15:24.310377] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.733 passed 00:07:42.733 Test: blockdev write read 8 blocks ...passed 00:07:42.733 Test: blockdev write read size > 128k ...passed 00:07:42.733 Test: blockdev write read invalid size ...passed 00:07:42.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.733 Test: blockdev write read max offset ...passed 00:07:42.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.733 Test: blockdev writev readv 8 blocks ...passed 00:07:42.733 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.733 Test: blockdev writev readv block ...passed 00:07:42.733 Test: blockdev writev readv size > 128k ...passed 00:07:42.733 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.733 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.316886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5c0a000 len:0x1000 00:07:42.733 [2024-11-29 14:15:24.316997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.733 passed 00:07:42.733 Test: blockdev nvme passthru rw ...passed 00:07:42.733 Test: blockdev nvme passthru vendor specific ...[2024-11-29 14:15:24.318135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.733 passed 00:07:42.733 Test: blockdev nvme admin passthru ...[2024-11-29 14:15:24.318230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.733 passed 00:07:42.733 Test: blockdev copy ...passed 00:07:42.733 Suite: bdevio tests on: Nvme2n2 00:07:42.733 Test: blockdev write read block ...passed 00:07:42.733 Test: blockdev write zeroes read block ...passed 00:07:42.733 Test: blockdev write zeroes read no split ...passed 00:07:42.733 Test: blockdev write zeroes read split ...passed 00:07:42.733 Test: blockdev write zeroes read split partial ...passed 00:07:42.733 Test: blockdev reset ...[2024-11-29 14:15:24.331794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.733 [2024-11-29 14:15:24.333729] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.733 passed 00:07:42.733 Test: blockdev write read 8 blocks ...passed 00:07:42.733 Test: blockdev write read size > 128k ...passed 00:07:42.733 Test: blockdev write read invalid size ...passed 00:07:42.733 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.733 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.733 Test: blockdev write read max offset ...passed 00:07:42.733 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.734 Test: blockdev writev readv 8 blocks ...passed 00:07:42.734 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.734 Test: blockdev writev readv block ...passed 00:07:42.734 Test: blockdev writev readv size > 128k ...passed 00:07:42.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.734 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.339130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9c05000 len:0x1000 00:07:42.734 [2024-11-29 14:15:24.339169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme passthru rw ...passed 00:07:42.734 Test: blockdev nvme passthru vendor specific ...[2024-11-29 14:15:24.339851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.734 [2024-11-29 14:15:24.339879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme admin passthru ...passed 00:07:42.734 Test: blockdev copy ...passed 00:07:42.734 Suite: bdevio tests on: Nvme2n1 00:07:42.734 Test: blockdev write read block ...passed 00:07:42.734 Test: blockdev write zeroes read block ...passed 00:07:42.734 Test: blockdev write zeroes read no split ...passed 00:07:42.734 Test: blockdev write zeroes read split ...passed 00:07:42.734 Test: blockdev write zeroes read split partial ...passed 00:07:42.734 Test: blockdev reset ...[2024-11-29 14:15:24.353193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.734 [2024-11-29 14:15:24.355093] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.734 passed 00:07:42.734 Test: blockdev write read 8 blocks ...passed 00:07:42.734 Test: blockdev write read size > 128k ...passed 00:07:42.734 Test: blockdev write read invalid size ...passed 00:07:42.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.734 Test: blockdev write read max offset ...passed 00:07:42.734 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.734 Test: blockdev writev readv 8 blocks ...passed 00:07:42.734 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.734 Test: blockdev writev readv block ...passed 00:07:42.734 Test: blockdev writev readv size > 128k ...passed 00:07:42.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.734 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.361788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5802000 len:0x1000 00:07:42.734 [2024-11-29 14:15:24.361907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme passthru rw ...passed 00:07:42.734 Test: blockdev nvme passthru vendor specific ...[2024-11-29 14:15:24.362947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.734 [2024-11-29 14:15:24.363028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme admin passthru ...passed 00:07:42.734 Test: blockdev copy ...passed 00:07:42.734 Suite: bdevio tests on: Nvme1n1p2 00:07:42.734 Test: blockdev write read block ...passed 00:07:42.734 Test: blockdev write zeroes read block ...passed 00:07:42.734 Test: blockdev write zeroes read no split ...passed 00:07:42.734 Test: blockdev write zeroes read split ...passed 00:07:42.734 Test: blockdev write zeroes read split partial ...passed 00:07:42.734 Test: blockdev reset ...[2024-11-29 14:15:24.377024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.734 [2024-11-29 14:15:24.378727] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.734 passed 00:07:42.734 Test: blockdev write read 8 blocks ...passed 00:07:42.734 Test: blockdev write read size > 128k ...passed 00:07:42.734 Test: blockdev write read invalid size ...passed 00:07:42.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.734 Test: blockdev write read max offset ...passed 00:07:42.734 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.734 Test: blockdev writev readv 8 blocks ...passed 00:07:42.734 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.734 Test: blockdev writev readv block ...passed 00:07:42.734 Test: blockdev writev readv size > 128k ...passed 00:07:42.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.734 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dd63b000 len:0x1000 00:07:42.734 [2024-11-29 14:15:24.383531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme passthru rw ...passed 00:07:42.734 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.734 Test: blockdev nvme admin passthru ...passed 00:07:42.734 Test: blockdev copy ...passed 00:07:42.734 Suite: bdevio tests on: Nvme1n1p1 00:07:42.734 Test: blockdev write read block ...passed 00:07:42.734 Test: blockdev write zeroes read block ...passed 00:07:42.734 Test: blockdev write zeroes read no split ...passed 00:07:42.734 Test: blockdev write zeroes read split ...passed 00:07:42.734 Test: blockdev write zeroes read split partial ...passed 00:07:42.734 Test: blockdev reset ...[2024-11-29 14:15:24.396703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.734 [2024-11-29 14:15:24.398903] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.734 passed 00:07:42.734 Test: blockdev write read 8 blocks ...passed 00:07:42.734 Test: blockdev write read size > 128k ...passed 00:07:42.734 Test: blockdev write read invalid size ...passed 00:07:42.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.734 Test: blockdev write read max offset ...passed 00:07:42.734 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.734 Test: blockdev writev readv 8 blocks ...passed 00:07:42.734 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.734 Test: blockdev writev readv block ...passed 00:07:42.734 Test: blockdev writev readv size > 128k ...passed 00:07:42.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.734 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.404223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2dd637000 len:0x1000 00:07:42.734 [2024-11-29 14:15:24.404268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme passthru rw ...passed 00:07:42.734 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.734 Test: blockdev nvme admin passthru ...passed 00:07:42.734 Test: blockdev copy ...passed 00:07:42.734 Suite: bdevio tests on: Nvme0n1 00:07:42.734 Test: blockdev write read block ...passed 00:07:42.734 Test: blockdev write zeroes read block ...passed 00:07:42.734 Test: blockdev write zeroes read no split ...passed 00:07:42.734 Test: blockdev write zeroes read split ...passed 00:07:42.734 Test: blockdev write zeroes read split partial ...passed 00:07:42.734 Test: blockdev reset ...[2024-11-29 14:15:24.417727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:42.734 [2024-11-29 14:15:24.419384] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.734 passed 00:07:42.734 Test: blockdev write read 8 blocks ...passed 00:07:42.734 Test: blockdev write read size > 128k ...passed 00:07:42.734 Test: blockdev write read invalid size ...passed 00:07:42.734 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.734 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.734 Test: blockdev write read max offset ...passed 00:07:42.734 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.734 Test: blockdev writev readv 8 blocks ...passed 00:07:42.734 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.734 Test: blockdev writev readv block ...passed 00:07:42.734 Test: blockdev writev readv size > 128k ...passed 00:07:42.734 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.734 Test: blockdev comparev and writev ...[2024-11-29 14:15:24.423924] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:42.734 separate metadata which is not supported yet. 
00:07:42.734 passed 00:07:42.734 Test: blockdev nvme passthru rw ...passed 00:07:42.734 Test: blockdev nvme passthru vendor specific ...[2024-11-29 14:15:24.424350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:42.734 [2024-11-29 14:15:24.424396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:42.734 passed 00:07:42.734 Test: blockdev nvme admin passthru ...passed 00:07:42.734 Test: blockdev copy ...passed 00:07:42.734 00:07:42.734 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.734 suites 7 7 n/a 0 0 00:07:42.734 tests 161 161 161 0 0 00:07:42.734 asserts 1025 1025 1025 0 n/a 00:07:42.734 00:07:42.734 Elapsed time = 0.377 seconds 00:07:42.734 0 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73701 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73701 ']' 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73701 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73701 00:07:42.734 killing process with pid 73701 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73701' 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73701 00:07:42.734 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73701 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:42.996 00:07:42.996 real 0m1.364s 00:07:42.996 user 0m3.355s 00:07:42.996 sys 0m0.296s 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.996 ************************************ 00:07:42.996 END TEST bdev_bounds 00:07:42.996 ************************************ 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.996 14:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.996 14:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:42.996 14:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.996 14:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.996 ************************************ 00:07:42.996 START TEST bdev_nbd 00:07:42.996 ************************************ 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:42.996 14:15:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73749 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73749 /var/tmp/spdk-nbd.sock 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73749 ']' 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.996 14:15:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:42.996 [2024-11-29 14:15:24.752375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.996 [2024-11-29 14:15:24.752499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.258 [2024-11-29 14:15:24.899986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.258 [2024-11-29 14:15:24.942306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.827 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.827 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.828 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.088 1+0 records in 00:07:44.088 1+0 records out 00:07:44.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416996 s, 9.8 MB/s 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.088 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:44.349 14:15:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.349 1+0 records in 00:07:44.349 1+0 records out 00:07:44.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042399 s, 9.7 MB/s 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.349 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.611 1+0 records in 00:07:44.611 1+0 records out 00:07:44.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449559 s, 9.1 MB/s 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.611 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:44.873 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:44.873 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.874 1+0 records in 00:07:44.874 1+0 records out 00:07:44.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405594 s, 10.1 MB/s 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:44.874 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.136 1+0 records in 00:07:45.136 1+0 records out 00:07:45.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585607 s, 7.0 MB/s 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:45.136 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:45.397 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:45.397 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:45.397 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.397 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.398 1+0 records in 00:07:45.398 1+0 records out 00:07:45.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417974 s, 9.8 MB/s 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.398 14:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.398 1+0 records in 00:07:45.398 1+0 records out 00:07:45.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350197 s, 11.7 MB/s 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.398 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd0", 00:07:45.659 "bdev_name": "Nvme0n1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd1", 00:07:45.659 "bdev_name": "Nvme1n1p1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd2", 00:07:45.659 "bdev_name": "Nvme1n1p2" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd3", 00:07:45.659 "bdev_name": "Nvme2n1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd4", 00:07:45.659 "bdev_name": "Nvme2n2" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd5", 00:07:45.659 "bdev_name": "Nvme2n3" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd6", 00:07:45.659 "bdev_name": "Nvme3n1" 00:07:45.659 } 00:07:45.659 ]' 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd0", 00:07:45.659 "bdev_name": "Nvme0n1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd1", 00:07:45.659 "bdev_name": "Nvme1n1p1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd2", 00:07:45.659 "bdev_name": "Nvme1n1p2" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd3", 00:07:45.659 "bdev_name": "Nvme2n1" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd4", 00:07:45.659 "bdev_name": "Nvme2n2" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd5", 00:07:45.659 "bdev_name": "Nvme2n3" 00:07:45.659 }, 00:07:45.659 { 00:07:45.659 "nbd_device": "/dev/nbd6", 00:07:45.659 "bdev_name": "Nvme3n1" 00:07:45.659 } 00:07:45.659 ]' 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.659 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.921 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.269 14:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.541 14:15:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.541 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.542 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.542 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.802 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.062 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.321 14:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.321 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.321 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.321 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:47.582 
14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:47.582 /dev/nbd0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.582 1+0 records in 00:07:47.582 1+0 records out 00:07:47.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688029 s, 6.0 MB/s 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.582 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:47.857 /dev/nbd1 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:47.857 14:15:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.857 1+0 records in 00:07:47.857 1+0 records out 00:07:47.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113066 s, 3.6 MB/s 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.857 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:48.119 /dev/nbd10 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.119 1+0 records in 00:07:48.119 1+0 records out 00:07:48.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788079 s, 5.2 MB/s 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.119 14:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:48.381 /dev/nbd11 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.381 1+0 records in 00:07:48.381 1+0 records out 00:07:48.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712969 s, 5.7 MB/s 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.381 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:48.643 /dev/nbd12 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.643 1+0 records in 00:07:48.643 1+0 records out 00:07:48.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057426 s, 7.1 MB/s 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.643 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:48.912 /dev/nbd13 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.912 1+0 records in 00:07:48.912 1+0 records out 00:07:48.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116364 s, 3.5 MB/s 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.912 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:49.172 /dev/nbd14 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.173 1+0 records in 00:07:49.173 1+0 records out 00:07:49.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000809654 s, 5.1 MB/s 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.173 14:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd0", 00:07:49.435 "bdev_name": "Nvme0n1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd1", 00:07:49.435 "bdev_name": "Nvme1n1p1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd10", 00:07:49.435 "bdev_name": "Nvme1n1p2" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd11", 00:07:49.435 "bdev_name": "Nvme2n1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd12", 00:07:49.435 "bdev_name": "Nvme2n2" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd13", 00:07:49.435 "bdev_name": "Nvme2n3" 
00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd14", 00:07:49.435 "bdev_name": "Nvme3n1" 00:07:49.435 } 00:07:49.435 ]' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd0", 00:07:49.435 "bdev_name": "Nvme0n1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd1", 00:07:49.435 "bdev_name": "Nvme1n1p1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd10", 00:07:49.435 "bdev_name": "Nvme1n1p2" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd11", 00:07:49.435 "bdev_name": "Nvme2n1" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd12", 00:07:49.435 "bdev_name": "Nvme2n2" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd13", 00:07:49.435 "bdev_name": "Nvme2n3" 00:07:49.435 }, 00:07:49.435 { 00:07:49.435 "nbd_device": "/dev/nbd14", 00:07:49.435 "bdev_name": "Nvme3n1" 00:07:49.435 } 00:07:49.435 ]' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:49.435 /dev/nbd1 00:07:49.435 /dev/nbd10 00:07:49.435 /dev/nbd11 00:07:49.435 /dev/nbd12 00:07:49.435 /dev/nbd13 00:07:49.435 /dev/nbd14' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:49.435 /dev/nbd1 00:07:49.435 /dev/nbd10 00:07:49.435 /dev/nbd11 00:07:49.435 /dev/nbd12 00:07:49.435 /dev/nbd13 00:07:49.435 /dev/nbd14' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:49.435 256+0 records in 00:07:49.435 256+0 records out 00:07:49.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077222 s, 136 MB/s 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.435 256+0 records in 00:07:49.435 256+0 records out 00:07:49.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0769647 s, 13.6 MB/s 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.435 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.696 256+0 records in 00:07:49.696 256+0 records out 00:07:49.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0766402 s, 13.7 MB/s 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.696 256+0 records in 00:07:49.696 256+0 records out 00:07:49.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787441 s, 13.3 MB/s 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:49.696 256+0 records in 00:07:49.696 256+0 records out 00:07:49.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0814157 s, 12.9 MB/s 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.696 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:49.958 256+0 records in 00:07:49.958 256+0 records out 00:07:49.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.087203 s, 12.0 MB/s 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:49.958 256+0 records in 00:07:49.958 256+0 records out 00:07:49.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101361 s, 10.3 MB/s 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:49.958 256+0 records in 00:07:49.958 256+0 records out 00:07:49.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0780903 s, 13.4 MB/s 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:49.958 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.220 14:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.480 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.741 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.001 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.263 14:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.263 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.522 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:51.780 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:52.039 malloc_lvol_verify 00:07:52.039 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:52.297 a074e4b1-2f58-43ac-b5eb-232973a26556 00:07:52.297 14:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:52.297 bbbc70b3-f4b8-4960-af1f-5aa27307cbf2 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:52.556 /dev/nbd0 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:52.556 mke2fs 1.47.0 (5-Feb-2023) 00:07:52.556 Discarding device blocks: 0/4096 done 00:07:52.556 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:52.556 00:07:52.556 Allocating group tables: 0/1 done 00:07:52.556 Writing inode tables: 0/1 done 00:07:52.556 Creating journal (1024 blocks): done 00:07:52.556 Writing superblocks and filesystem accounting information: 0/1 done 00:07:52.556 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:52.556 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73749 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73749 ']' 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73749 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73749 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.815 killing process with pid 73749 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73749' 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73749 00:07:52.815 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73749 00:07:53.073 14:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:53.073 00:07:53.073 real 0m10.065s 00:07:53.073 user 0m14.593s 00:07:53.073 sys 0m3.495s 00:07:53.073 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.073 14:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 END TEST bdev_nbd 00:07:53.073 ************************************ 00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:53.073 skipping fio tests on NVMe due to multi-ns failures. 00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.073 14:15:34 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.073 14:15:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:53.073 14:15:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.073 14:15:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 ************************************ 00:07:53.073 START TEST bdev_verify 00:07:53.073 ************************************ 00:07:53.073 14:15:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.073 [2024-11-29 14:15:34.857892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.073 [2024-11-29 14:15:34.857989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74151 ] 00:07:53.330 [2024-11-29 14:15:34.999241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.330 [2024-11-29 14:15:35.039845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.330 [2024-11-29 14:15:35.039877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.893 Running I/O for 5 seconds... 
00:07:56.200 23296.00 IOPS, 91.00 MiB/s [2024-11-29T14:15:38.930Z] 24000.00 IOPS, 93.75 MiB/s [2024-11-29T14:15:39.868Z] 24597.33 IOPS, 96.08 MiB/s [2024-11-29T14:15:40.821Z] 24848.00 IOPS, 97.06 MiB/s [2024-11-29T14:15:40.821Z] 24819.20 IOPS, 96.95 MiB/s 00:07:59.027 Latency(us) 00:07:59.027 [2024-11-29T14:15:40.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.027 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0xbd0bd 00:07:59.027 Nvme0n1 : 5.05 1824.93 7.13 0.00 0.00 69915.03 14014.62 76626.71 00:07:59.027 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:59.027 Nvme0n1 : 5.07 1691.46 6.61 0.00 0.00 75459.95 16434.41 84289.38 00:07:59.027 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x4ff80 00:07:59.027 Nvme1n1p1 : 5.05 1824.34 7.13 0.00 0.00 69804.38 15526.99 65737.65 00:07:59.027 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:59.027 Nvme1n1p1 : 5.07 1690.71 6.60 0.00 0.00 75305.25 17341.83 70980.53 00:07:59.027 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x4ff7f 00:07:59.027 Nvme1n1p2 : 5.05 1823.19 7.12 0.00 0.00 69717.95 17140.18 62914.56 00:07:59.027 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:59.027 Nvme1n1p2 : 5.07 1690.15 6.60 0.00 0.00 75201.46 15829.46 64931.05 00:07:59.027 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x80000 00:07:59.027 Nvme2n1 : 5.06 1822.71 7.12 0.00 0.00 69619.34 17241.01 59688.17 00:07:59.027 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x80000 length 0x80000 00:07:59.027 Nvme2n1 : 5.08 1689.11 6.60 0.00 0.00 75067.86 15930.29 63317.86 00:07:59.027 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x80000 00:07:59.027 Nvme2n2 : 5.07 1830.08 7.15 0.00 0.00 69235.44 3780.92 61301.37 00:07:59.027 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x80000 length 0x80000 00:07:59.027 Nvme2n2 : 5.08 1688.65 6.60 0.00 0.00 74906.53 15022.87 64931.05 00:07:59.027 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x80000 00:07:59.027 Nvme2n3 : 5.08 1839.22 7.18 0.00 0.00 68832.82 6377.16 63721.16 00:07:59.027 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x80000 length 0x80000 00:07:59.027 Nvme2n3 : 5.08 1688.21 6.59 0.00 0.00 74749.98 14014.62 66140.95 00:07:59.027 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x0 length 0x20000 00:07:59.027 Nvme3n1 : 5.08 1838.19 7.18 0.00 0.00 68737.23 7360.20 66544.25 00:07:59.027 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.027 Verification LBA range: start 0x20000 length 0x20000 00:07:59.027 Nvme3n1 
: 5.09 1698.15 6.63 0.00 0.00 74277.07 1852.65 66947.54 00:07:59.027 [2024-11-29T14:15:40.821Z] =================================================================================================================== 00:07:59.027 [2024-11-29T14:15:40.821Z] Total : 24639.07 96.25 0.00 0.00 72094.54 1852.65 84289.38 00:07:59.645 00:07:59.645 real 0m6.333s 00:07:59.645 user 0m11.938s 00:07:59.645 sys 0m0.212s 00:07:59.645 14:15:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.645 14:15:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 ************************************ 00:07:59.645 END TEST bdev_verify 00:07:59.645 ************************************ 00:07:59.645 14:15:41 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.645 14:15:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:59.645 14:15:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.645 14:15:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.645 ************************************ 00:07:59.645 START TEST bdev_verify_big_io 00:07:59.645 ************************************ 00:07:59.645 14:15:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.645 [2024-11-29 14:15:41.243561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:59.645 [2024-11-29 14:15:41.243671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74239 ] 00:07:59.645 [2024-11-29 14:15:41.384865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.645 [2024-11-29 14:15:41.427383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.645 [2024-11-29 14:15:41.427412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.212 Running I/O for 5 seconds... 
00:08:06.291 1530.00 IOPS, 95.62 MiB/s [2024-11-29T14:15:48.649Z] 3129.00 IOPS, 195.56 MiB/s [2024-11-29T14:15:48.649Z] 3599.00 IOPS, 224.94 MiB/s 00:08:06.855 Latency(us) 00:08:06.855 [2024-11-29T14:15:48.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.855 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0xbd0b 00:08:06.855 Nvme0n1 : 5.79 127.09 7.94 0.00 0.00 963817.29 16837.71 1213121.77 00:08:06.855 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:06.855 Nvme0n1 : 6.20 56.78 3.55 0.00 0.00 2142447.03 13308.85 2516582.40 00:08:06.855 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x4ff8 00:08:06.855 Nvme1n1p1 : 5.79 116.37 7.27 0.00 0.00 1013241.91 95581.74 1690627.15 00:08:06.855 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:06.855 Nvme1n1p1 : 6.10 80.72 5.05 0.00 0.00 1427625.32 94775.14 1664816.05 00:08:06.855 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x4ff7 00:08:06.855 Nvme1n1p2 : 5.79 120.65 7.54 0.00 0.00 962641.69 116149.96 1716438.25 00:08:06.855 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:06.855 Nvme1n1p2 : 6.10 83.98 5.25 0.00 0.00 1305693.74 62511.26 1335724.50 00:08:06.855 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x8000 00:08:06.855 Nvme2n1 : 5.91 135.29 8.46 0.00 0.00 833676.47 84289.38 1045349.61 00:08:06.855 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x8000 length 0x8000 00:08:06.855 Nvme2n1 : 6.17 93.84 5.86 0.00 0.00 1104222.83 20669.05 1361535.61 00:08:06.855 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x8000 00:08:06.855 Nvme2n2 : 6.02 144.88 9.05 0.00 0.00 759210.28 37708.41 871124.68 00:08:06.855 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x8000 length 0x8000 00:08:06.855 Nvme2n2 : 6.31 122.35 7.65 0.00 0.00 821360.81 19257.50 1387346.71 00:08:06.855 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x8000 00:08:06.855 Nvme2n3 : 6.02 148.82 9.30 0.00 0.00 718967.84 38918.30 890483.00 00:08:06.855 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x8000 length 0x8000 00:08:06.855 Nvme2n3 : 6.49 181.86 11.37 0.00 0.00 530212.36 12653.49 1393799.48 00:08:06.855 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x0 length 0x2000 00:08:06.855 Nvme3n1 : 6.07 167.65 10.48 0.00 0.00 620943.69 478.92 909841.33 00:08:06.855 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.855 Verification LBA range: start 0x2000 length 0x2000 00:08:06.855 Nvme3n1 : 6.73 323.36 20.21 0.00 0.00 285726.47 373.37 1426063.36 00:08:06.855 
[2024-11-29T14:15:48.649Z] =================================================================================================================== 00:08:06.855 [2024-11-29T14:15:48.649Z] Total : 1903.64 118.98 0.00 0.00 792329.70 373.37 2516582.40 00:08:08.226 00:08:08.226 real 0m8.475s 00:08:08.226 user 0m16.167s 00:08:08.226 sys 0m0.245s 00:08:08.226 14:15:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.226 14:15:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 END TEST bdev_verify_big_io 00:08:08.226 ************************************ 00:08:08.226 14:15:49 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.226 14:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:08.226 14:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.226 14:15:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 START TEST bdev_write_zeroes 00:08:08.226 ************************************ 00:08:08.226 14:15:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.226 [2024-11-29 14:15:49.752486] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:08.226 [2024-11-29 14:15:49.752608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74352 ] 00:08:08.226 [2024-11-29 14:15:49.897423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.226 [2024-11-29 14:15:49.936327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.794 Running I/O for 1 seconds... 
00:08:09.727 70336.00 IOPS, 274.75 MiB/s 00:08:09.727 Latency(us) 00:08:09.727 [2024-11-29T14:15:51.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.727 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme0n1 : 1.02 9990.87 39.03 0.00 0.00 12783.34 11393.18 34078.72 00:08:09.727 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme1n1p1 : 1.03 9978.68 38.98 0.00 0.00 12778.79 11090.71 33675.42 00:08:09.727 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme1n1p2 : 1.03 9966.58 38.93 0.00 0.00 12734.29 8721.33 33070.47 00:08:09.727 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme2n1 : 1.03 9955.25 38.89 0.00 0.00 12720.33 8469.27 31255.63 00:08:09.727 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme2n2 : 1.03 9944.07 38.84 0.00 0.00 12706.05 7461.02 30852.33 00:08:09.727 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme2n3 : 1.03 9932.88 38.80 0.00 0.00 12693.52 6956.90 32263.88 00:08:09.727 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.727 Nvme3n1 : 1.03 9921.63 38.76 0.00 0.00 12690.18 6276.33 34280.37 00:08:09.727 [2024-11-29T14:15:51.521Z] =================================================================================================================== 00:08:09.727 [2024-11-29T14:15:51.521Z] Total : 69689.96 272.23 0.00 0.00 12729.50 6276.33 34280.37 00:08:09.985 00:08:09.985 real 0m1.890s 00:08:09.985 user 0m1.597s 00:08:09.985 sys 0m0.183s 00:08:09.985 14:15:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.985 14:15:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:09.985 ************************************ 00:08:09.985 END TEST bdev_write_zeroes 00:08:09.985 ************************************ 00:08:09.985 14:15:51 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.985 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:09.985 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.985 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.985 ************************************ 00:08:09.985 START TEST bdev_json_nonenclosed 00:08:09.985 ************************************ 00:08:09.985 14:15:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.985 [2024-11-29 14:15:51.689866] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.985 [2024-11-29 14:15:51.689976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74394 ] 00:08:10.244 [2024-11-29 14:15:51.838987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.244 [2024-11-29 14:15:51.880872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.244 [2024-11-29 14:15:51.880966] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:10.244 [2024-11-29 14:15:51.880985] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:10.244 [2024-11-29 14:15:51.880997] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.244 00:08:10.244 real 0m0.339s 00:08:10.244 user 0m0.135s 00:08:10.244 sys 0m0.101s 00:08:10.244 14:15:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.244 14:15:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:10.244 ************************************ 00:08:10.244 END TEST bdev_json_nonenclosed 00:08:10.244 ************************************ 00:08:10.244 14:15:51 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.244 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:10.244 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.244 14:15:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.244 ************************************ 00:08:10.244 START TEST bdev_json_nonarray 00:08:10.244 ************************************ 00:08:10.244 14:15:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.503 [2024-11-29 14:15:52.072881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.503 [2024-11-29 14:15:52.073030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74414 ] 00:08:10.503 [2024-11-29 14:15:52.227211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.503 [2024-11-29 14:15:52.268241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.503 [2024-11-29 14:15:52.268350] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:10.503 [2024-11-29 14:15:52.268370] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:10.503 [2024-11-29 14:15:52.268385] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.760 00:08:10.760 real 0m0.351s 00:08:10.760 user 0m0.141s 00:08:10.760 sys 0m0.106s 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:10.760 ************************************ 00:08:10.760 END TEST bdev_json_nonarray 00:08:10.760 ************************************ 00:08:10.760 14:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:10.760 14:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:10.760 14:15:52 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:10.760 14:15:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.760 14:15:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.760 14:15:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.760 ************************************ 00:08:10.760 START TEST bdev_gpt_uuid 00:08:10.760 ************************************ 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74434 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74434 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74434 ']' 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.760 14:15:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.760 [2024-11-29 14:15:52.470383] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:10.760 [2024-11-29 14:15:52.470511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74434 ] 00:08:11.018 [2024-11-29 14:15:52.616702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.018 [2024-11-29 14:15:52.659090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.584 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.584 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:11.584 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:11.584 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.584 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.842 Some configs were skipped because the RPC state that can call them passed over. 00:08:11.842 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.842 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:11.842 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.842 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.100 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:12.100 { 00:08:12.100 "name": "Nvme1n1p1", 00:08:12.100 "aliases": [ 00:08:12.100 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:12.100 ], 00:08:12.100 "product_name": "GPT Disk", 00:08:12.100 "block_size": 4096, 00:08:12.100 "num_blocks": 655104, 00:08:12.100 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:12.100 "assigned_rate_limits": { 00:08:12.100 "rw_ios_per_sec": 0, 00:08:12.100 "rw_mbytes_per_sec": 0, 00:08:12.100 "r_mbytes_per_sec": 0, 00:08:12.100 "w_mbytes_per_sec": 0 00:08:12.100 }, 00:08:12.100 "claimed": false, 00:08:12.100 "zoned": false, 00:08:12.100 "supported_io_types": { 00:08:12.100 "read": true, 00:08:12.100 "write": true, 00:08:12.100 "unmap": true, 00:08:12.100 "flush": true, 00:08:12.100 "reset": true, 00:08:12.100 "nvme_admin": false, 00:08:12.100 "nvme_io": false, 00:08:12.100 "nvme_io_md": false, 00:08:12.100 "write_zeroes": true, 00:08:12.100 "zcopy": false, 00:08:12.100 "get_zone_info": false, 00:08:12.100 "zone_management": false, 00:08:12.100 "zone_append": false, 00:08:12.100 "compare": true, 00:08:12.100 "compare_and_write": false, 00:08:12.100 "abort": true, 00:08:12.100 "seek_hole": false, 00:08:12.100 "seek_data": false, 00:08:12.100 "copy": true, 00:08:12.100 "nvme_iov_md": false 00:08:12.100 }, 00:08:12.100 "driver_specific": { 
00:08:12.100 "gpt": { 00:08:12.100 "base_bdev": "Nvme1n1", 00:08:12.100 "offset_blocks": 256, 00:08:12.100 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:12.101 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:12.101 "partition_name": "SPDK_TEST_first" 00:08:12.101 } 00:08:12.101 } 00:08:12.101 } 00:08:12.101 ]' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:12.101 { 00:08:12.101 "name": "Nvme1n1p2", 00:08:12.101 "aliases": [ 00:08:12.101 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:12.101 ], 00:08:12.101 "product_name": "GPT Disk", 00:08:12.101 "block_size": 4096, 00:08:12.101 "num_blocks": 655103, 00:08:12.101 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:12.101 "assigned_rate_limits": { 00:08:12.101 "rw_ios_per_sec": 0, 00:08:12.101 "rw_mbytes_per_sec": 0, 00:08:12.101 "r_mbytes_per_sec": 0, 00:08:12.101 "w_mbytes_per_sec": 0 00:08:12.101 }, 00:08:12.101 "claimed": false, 00:08:12.101 "zoned": false, 00:08:12.101 "supported_io_types": { 00:08:12.101 "read": true, 00:08:12.101 "write": true, 00:08:12.101 "unmap": true, 00:08:12.101 "flush": true, 00:08:12.101 "reset": true, 00:08:12.101 "nvme_admin": false, 00:08:12.101 "nvme_io": false, 00:08:12.101 "nvme_io_md": false, 00:08:12.101 "write_zeroes": true, 00:08:12.101 "zcopy": false, 00:08:12.101 "get_zone_info": false, 00:08:12.101 "zone_management": false, 00:08:12.101 "zone_append": false, 00:08:12.101 "compare": true, 00:08:12.101 "compare_and_write": false, 00:08:12.101 "abort": true, 00:08:12.101 "seek_hole": false, 00:08:12.101 "seek_data": false, 00:08:12.101 "copy": true, 00:08:12.101 "nvme_iov_md": false 00:08:12.101 }, 00:08:12.101 "driver_specific": { 00:08:12.101 "gpt": { 00:08:12.101 "base_bdev": "Nvme1n1", 00:08:12.101 "offset_blocks": 655360, 00:08:12.101 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:12.101 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:12.101 "partition_name": "SPDK_TEST_second" 00:08:12.101 } 00:08:12.101 } 00:08:12.101 } 00:08:12.101 ]' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74434 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74434 ']' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74434 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74434 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.101 killing process with pid 74434 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74434' 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74434 00:08:12.101 14:15:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74434 00:08:12.668 00:08:12.668 real 0m1.827s 00:08:12.668 user 0m1.958s 00:08:12.668 sys 0m0.378s 00:08:12.668 14:15:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.668 14:15:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.668 ************************************ 00:08:12.668 END TEST bdev_gpt_uuid 00:08:12.668 ************************************ 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:12.668 14:15:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.927 Waiting for block devices as requested 00:08:13.185 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.185 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:13.185 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.185 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:18.450 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:18.450 14:15:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:18.450 14:15:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:18.708 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:18.708 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:18.708 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:18.708 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:18.708 14:16:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:18.708 00:08:18.708 real 0m48.453s 00:08:18.708 user 1m1.953s 00:08:18.708 sys 0m7.557s 00:08:18.708 14:16:00 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.708 ************************************ 00:08:18.708 END TEST blockdev_nvme_gpt 00:08:18.708 ************************************ 00:08:18.708 14:16:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:18.708 14:16:00 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:18.708 14:16:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.708 14:16:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.708 14:16:00 -- common/autotest_common.sh@10 -- # set +x 00:08:18.708 ************************************ 00:08:18.708 START TEST nvme 00:08:18.708 ************************************ 00:08:18.708 14:16:00 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:18.708 * Looking for test storage... 00:08:18.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:18.708 14:16:00 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:18.708 14:16:00 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:18.708 14:16:00 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.709 14:16:00 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.709 14:16:00 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.709 14:16:00 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.709 14:16:00 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.709 14:16:00 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.709 14:16:00 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:18.709 14:16:00 nvme -- scripts/common.sh@345 -- # : 1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.709 14:16:00 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.709 14:16:00 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@353 -- # local d=1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.709 14:16:00 nvme -- scripts/common.sh@355 -- # echo 1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.709 14:16:00 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@353 -- # local d=2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.709 14:16:00 nvme -- scripts/common.sh@355 -- # echo 2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.709 14:16:00 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.709 14:16:00 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.709 14:16:00 nvme -- scripts/common.sh@368 -- # return 0 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:18.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.709 --rc genhtml_branch_coverage=1 00:08:18.709 --rc genhtml_function_coverage=1 00:08:18.709 --rc genhtml_legend=1 00:08:18.709 --rc geninfo_all_blocks=1 00:08:18.709 --rc geninfo_unexecuted_blocks=1 00:08:18.709 00:08:18.709 ' 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:18.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.709 --rc genhtml_branch_coverage=1 00:08:18.709 --rc genhtml_function_coverage=1 00:08:18.709 --rc genhtml_legend=1 00:08:18.709 --rc geninfo_all_blocks=1 00:08:18.709 --rc geninfo_unexecuted_blocks=1 00:08:18.709 00:08:18.709 ' 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:18.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.709 --rc genhtml_branch_coverage=1 00:08:18.709 --rc genhtml_function_coverage=1 00:08:18.709 --rc genhtml_legend=1 00:08:18.709 --rc geninfo_all_blocks=1 00:08:18.709 --rc geninfo_unexecuted_blocks=1 00:08:18.709 00:08:18.709 ' 00:08:18.709 14:16:00 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:18.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.709 --rc genhtml_branch_coverage=1 00:08:18.709 --rc genhtml_function_coverage=1 00:08:18.709 --rc genhtml_legend=1 00:08:18.709 --rc geninfo_all_blocks=1 00:08:18.709 --rc geninfo_unexecuted_blocks=1 00:08:18.709 00:08:18.709 ' 00:08:18.709 14:16:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.840 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.840 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.840 14:16:01 nvme -- nvme/nvme.sh@79 -- # uname 00:08:19.841 14:16:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:19.841 14:16:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:19.841 14:16:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.841 14:16:01 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:19.841 Waiting for stub to ready for secondary processes... 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1071 -- # stubpid=75057 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75057 ]] 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:19.841 14:16:01 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:19.841 [2024-11-29 14:16:01.604344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:19.841 [2024-11-29 14:16:01.604466] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:20.775 [2024-11-29 14:16:02.336483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.775 [2024-11-29 14:16:02.356423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.775 [2024-11-29 14:16:02.356599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.775 [2024-11-29 14:16:02.356683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.775 [2024-11-29 14:16:02.366938] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:20.775 [2024-11-29 14:16:02.366979] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.775 [2024-11-29 14:16:02.381589] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:20.775 [2024-11-29 14:16:02.382012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:20.775 [2024-11-29 14:16:02.384261] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.775 [2024-11-29 14:16:02.384721] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:20.775 [2024-11-29 14:16:02.384845] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:20.775 [2024-11-29 14:16:02.386077] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.775 [2024-11-29 14:16:02.386384] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:20.775 [2024-11-29 14:16:02.386521] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:20.775 [2024-11-29 14:16:02.388227] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.775 [2024-11-29 14:16:02.388486] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:20.775 [2024-11-29 14:16:02.388617] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:20.775 [2024-11-29 14:16:02.388740] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:20.775 [2024-11-29 14:16:02.388889] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:21.034 done. 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:21.034 14:16:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.034 ************************************ 00:08:21.034 START TEST nvme_reset 00:08:21.034 ************************************ 00:08:21.034 14:16:02 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:21.034 Initializing NVMe Controllers 00:08:21.034 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:21.034 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:21.034 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:21.034 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:21.034 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:21.034 00:08:21.034 real 0m0.194s 00:08:21.034 user 0m0.057s 00:08:21.034 sys 0m0.096s 00:08:21.034 14:16:02 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.034 ************************************ 00:08:21.034 END TEST nvme_reset 00:08:21.034 ************************************ 00:08:21.034 14:16:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:21.034 14:16:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.034 14:16:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.034 ************************************ 00:08:21.034 START TEST nvme_identify 00:08:21.034 ************************************ 00:08:21.034 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:21.034 14:16:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:21.034 14:16:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:21.034 14:16:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:21.034 14:16:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:21.034 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:21.296 14:16:02 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:21.296 14:16:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:21.296 [2024-11-29 
14:16:03.030158] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75078 terminated unexpected 00:08:21.296 ===================================================== 00:08:21.296 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.296 ===================================================== 00:08:21.296 Controller Capabilities/Features 00:08:21.296 ================================ 00:08:21.296 Vendor ID: 1b36 00:08:21.296 Subsystem Vendor ID: 1af4 00:08:21.296 Serial Number: 12343 00:08:21.296 Model Number: QEMU NVMe Ctrl 00:08:21.296 Firmware Version: 8.0.0 00:08:21.296 Recommended Arb Burst: 6 00:08:21.296 IEEE OUI Identifier: 00 54 52 00:08:21.296 Multi-path I/O 00:08:21.296 May have multiple subsystem ports: No 00:08:21.296 May have multiple controllers: Yes 00:08:21.296 Associated with SR-IOV VF: No 00:08:21.296 Max Data Transfer Size: 524288 00:08:21.296 Max Number of Namespaces: 256 00:08:21.296 Max Number of I/O Queues: 64 00:08:21.296 NVMe Specification Version (VS): 1.4 00:08:21.296 NVMe Specification Version (Identify): 1.4 00:08:21.296 Maximum Queue Entries: 2048 00:08:21.296 Contiguous Queues Required: Yes 00:08:21.296 Arbitration Mechanisms Supported 00:08:21.296 Weighted Round Robin: Not Supported 00:08:21.296 Vendor Specific: Not Supported 00:08:21.296 Reset Timeout: 7500 ms 00:08:21.296 Doorbell Stride: 4 bytes 00:08:21.296 NVM Subsystem Reset: Not Supported 00:08:21.296 Command Sets Supported 00:08:21.296 NVM Command Set: Supported 00:08:21.296 Boot Partition: Not Supported 00:08:21.296 Memory Page Size Minimum: 4096 bytes 00:08:21.296 Memory Page Size Maximum: 65536 bytes 00:08:21.296 Persistent Memory Region: Not Supported 00:08:21.296 Optional Asynchronous Events Supported 00:08:21.296 Namespace Attribute Notices: Supported 00:08:21.296 Firmware Activation Notices: Not Supported 00:08:21.296 ANA Change Notices: Not Supported 00:08:21.296 PLE Aggregate Log Change Notices: Not Supported 00:08:21.296 LBA Status Info Alert Notices: Not Supported 00:08:21.296 EGE Aggregate Log Change Notices: Not Supported 00:08:21.296 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.296 Zone Descriptor Change Notices: Not Supported 00:08:21.296 Discovery Log Change Notices: Not Supported 00:08:21.296 Controller Attributes 00:08:21.296 128-bit Host Identifier: Not Supported 00:08:21.296 Non-Operational Permissive Mode: Not Supported 00:08:21.296 NVM Sets: Not Supported 00:08:21.296 Read Recovery Levels: Not Supported 00:08:21.296 Endurance Groups: Supported 00:08:21.296 Predictable Latency Mode: Not Supported 00:08:21.296 Traffic Based Keep ALive: Not Supported 00:08:21.296 Namespace Granularity: Not Supported 00:08:21.296 SQ Associations: Not Supported 00:08:21.296 UUID List: Not Supported 00:08:21.296 Multi-Domain Subsystem: Not Supported 00:08:21.296 Fixed Capacity Management: Not Supported 00:08:21.296 Variable Capacity Management: Not Supported 00:08:21.296 Delete Endurance Group: Not Supported 00:08:21.296 Delete NVM Set: Not Supported 00:08:21.296 Extended LBA Formats Supported: Supported 00:08:21.296 Flexible Data Placement Supported: Supported 00:08:21.296 00:08:21.296 Controller Memory Buffer Support 00:08:21.296 ================================ 00:08:21.296 Supported: No 00:08:21.296 00:08:21.296 Persistent Memory Region Support 00:08:21.296 ================================ 00:08:21.296 Supported: No 00:08:21.296 00:08:21.296 Admin Command Set Attributes 00:08:21.296 ============================ 00:08:21.296 Security Send/Receive: Not 
Supported 00:08:21.296 Format NVM: Supported 00:08:21.296 Firmware Activate/Download: Not Supported 00:08:21.296 Namespace Management: Supported 00:08:21.296 Device Self-Test: Not Supported 00:08:21.296 Directives: Supported 00:08:21.296 NVMe-MI: Not Supported 00:08:21.296 Virtualization Management: Not Supported 00:08:21.296 Doorbell Buffer Config: Supported 00:08:21.296 Get LBA Status Capability: Not Supported 00:08:21.296 Command & Feature Lockdown Capability: Not Supported 00:08:21.296 Abort Command Limit: 4 00:08:21.296 Async Event Request Limit: 4 00:08:21.296 Number of Firmware Slots: N/A 00:08:21.296 Firmware Slot 1 Read-Only: N/A 00:08:21.296 Firmware Activation Without Reset: N/A 00:08:21.296 Multiple Update Detection Support: N/A 00:08:21.296 Firmware Update Granularity: No Information Provided 00:08:21.296 Per-Namespace SMART Log: Yes 00:08:21.296 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.296 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:21.296 Command Effects Log Page: Supported 00:08:21.296 Get Log Page Extended Data: Supported 00:08:21.296 Telemetry Log Pages: Not Supported 00:08:21.296 Persistent Event Log Pages: Not Supported 00:08:21.296 Supported Log Pages Log Page: May Support 00:08:21.296 Commands Supported & Effects Log Page: Not Supported 00:08:21.296 Feature Identifiers & Effects Log Page:May Support 00:08:21.296 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.296 Data Area 4 for Telemetry Log: Not Supported 00:08:21.296 Error Log Page Entries Supported: 1 00:08:21.296 Keep Alive: Not Supported 00:08:21.296 00:08:21.296 NVM Command Set Attributes 00:08:21.296 ========================== 00:08:21.296 Submission Queue Entry Size 00:08:21.296 Max: 64 00:08:21.296 Min: 64 00:08:21.296 Completion Queue Entry Size 00:08:21.296 Max: 16 00:08:21.296 Min: 16 00:08:21.296 Number of Namespaces: 256 00:08:21.297 Compare Command: Supported 00:08:21.297 Write Uncorrectable Command: Not Supported 00:08:21.297 Dataset Management Command: Supported 00:08:21.297 Write Zeroes Command: Supported 00:08:21.297 Set Features Save Field: Supported 00:08:21.297 Reservations: Not Supported 00:08:21.297 Timestamp: Supported 00:08:21.297 Copy: Supported 00:08:21.297 Volatile Write Cache: Present 00:08:21.297 Atomic Write Unit (Normal): 1 00:08:21.297 Atomic Write Unit (PFail): 1 00:08:21.297 Atomic Compare & Write Unit: 1 00:08:21.297 Fused Compare & Write: Not Supported 00:08:21.297 Scatter-Gather List 00:08:21.297 SGL Command Set: Supported 00:08:21.297 SGL Keyed: Not Supported 00:08:21.297 SGL Bit Bucket Descriptor: Not Supported 00:08:21.297 SGL Metadata Pointer: Not Supported 00:08:21.297 Oversized SGL: Not Supported 00:08:21.297 SGL Metadata Address: Not Supported 00:08:21.297 SGL Offset: Not Supported 00:08:21.297 Transport SGL Data Block: Not Supported 00:08:21.297 Replay Protected Memory Block: Not Supported 00:08:21.297 00:08:21.297 Firmware Slot Information 00:08:21.297 ========================= 00:08:21.297 Active slot: 1 00:08:21.297 Slot 1 Firmware Revision: 1.0 00:08:21.297 00:08:21.297 00:08:21.297 Commands Supported and Effects 00:08:21.297 ============================== 00:08:21.297 Admin Commands 00:08:21.297 -------------- 00:08:21.297 Delete I/O Submission Queue (00h): Supported 00:08:21.297 Create I/O Submission Queue (01h): Supported 00:08:21.297 Get Log Page (02h): Supported 00:08:21.297 Delete I/O Completion Queue (04h): Supported 00:08:21.297 Create I/O Completion Queue (05h): Supported 00:08:21.297 Identify (06h): Supported 
00:08:21.297 Abort (08h): Supported 00:08:21.297 Set Features (09h): Supported 00:08:21.297 Get Features (0Ah): Supported 00:08:21.297 Asynchronous Event Request (0Ch): Supported 00:08:21.297 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.297 Directive Send (19h): Supported 00:08:21.297 Directive Receive (1Ah): Supported 00:08:21.297 Virtualization Management (1Ch): Supported 00:08:21.297 Doorbell Buffer Config (7Ch): Supported 00:08:21.297 Format NVM (80h): Supported LBA-Change 00:08:21.297 I/O Commands 00:08:21.297 ------------ 00:08:21.297 Flush (00h): Supported LBA-Change 00:08:21.297 Write (01h): Supported LBA-Change 00:08:21.297 Read (02h): Supported 00:08:21.297 Compare (05h): Supported 00:08:21.297 Write Zeroes (08h): Supported LBA-Change 00:08:21.297 Dataset Management (09h): Supported LBA-Change 00:08:21.297 Unknown (0Ch): Supported 00:08:21.297 Unknown (12h): Supported 00:08:21.297 Copy (19h): Supported LBA-Change 00:08:21.297 Unknown (1Dh): Supported LBA-Change 00:08:21.297 00:08:21.297 Error Log 00:08:21.297 ========= 00:08:21.297 00:08:21.297 Arbitration 00:08:21.297 =========== 00:08:21.297 Arbitration Burst: no limit 00:08:21.297 00:08:21.297 Power Management 00:08:21.297 ================ 00:08:21.297 Number of Power States: 1 00:08:21.297 Current Power State: Power State #0 00:08:21.297 Power State #0: 00:08:21.297 Max Power: 25.00 W 00:08:21.297 Non-Operational State: Operational 00:08:21.297 Entry Latency: 16 microseconds 00:08:21.297 Exit Latency: 4 microseconds 00:08:21.297 Relative Read Throughput: 0 00:08:21.297 Relative Read Latency: 0 00:08:21.297 Relative Write Throughput: 0 00:08:21.297 Relative Write Latency: 0 00:08:21.297 Idle Power: Not Reported 00:08:21.297 Active Power: Not Reported 00:08:21.297 Non-Operational Permissive Mode: Not Supported 00:08:21.297 00:08:21.297 Health Information 00:08:21.297 ================== 00:08:21.297 Critical Warnings: 00:08:21.297 Available Spare Space: OK 00:08:21.297 Temperature: OK 00:08:21.297 Device Reliability: OK 00:08:21.297 Read Only: No 00:08:21.297 Volatile Memory Backup: OK 00:08:21.297 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.297 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.297 Available Spare: 0% 00:08:21.297 Available Spare Threshold: 0% 00:08:21.297 Life Percentage Used: 0% 00:08:21.297 Data Units Read: 1088 00:08:21.297 Data Units Written: 1017 00:08:21.297 Host Read Commands: 46138 00:08:21.297 Host Write Commands: 45561 00:08:21.297 Controller Busy Time: 0 minutes 00:08:21.297 Power Cycles: 0 00:08:21.297 Power On Hours: 0 hours 00:08:21.297 Unsafe Shutdowns: 0 00:08:21.297 Unrecoverable Media Errors: 0 00:08:21.297 Lifetime Error Log Entries: 0 00:08:21.297 Warning Temperature Time: 0 minutes 00:08:21.297 Critical Temperature Time: 0 minutes 00:08:21.297 00:08:21.297 Number of Queues 00:08:21.297 ================ 00:08:21.297 Number of I/O Submission Queues: 64 00:08:21.297 Number of I/O Completion Queues: 64 00:08:21.297 00:08:21.297 ZNS Specific Controller Data 00:08:21.297 ============================ 00:08:21.297 Zone Append Size Limit: 0 00:08:21.297 00:08:21.297 00:08:21.297 Active Namespaces 00:08:21.297 ================= 00:08:21.297 Namespace ID:1 00:08:21.297 Error Recovery Timeout: Unlimited 00:08:21.297 Command Set Identifier: NVM (00h) 00:08:21.297 Deallocate: Supported 00:08:21.297 Deallocated/Unwritten Error: Supported 00:08:21.297 Deallocated Read Value: All 0x00 00:08:21.297 Deallocate in Write Zeroes: Not Supported 00:08:21.297 Deallocated Guard 
Field: 0xFFFF 00:08:21.297 Flush: Supported 00:08:21.297 Reservation: Not Supported 00:08:21.297 Namespace Sharing Capabilities: Multiple Controllers 00:08:21.297 Size (in LBAs): 262144 (1GiB) 00:08:21.297 Capacity (in LBAs): 262144 (1GiB) 00:08:21.297 Utilization (in LBAs): 262144 (1GiB) 00:08:21.297 Thin Provisioning: Not Supported 00:08:21.297 Per-NS Atomic Units: No 00:08:21.297 Maximum Single Source Range Length: 128 00:08:21.297 Maximum Copy Length: 128 00:08:21.297 Maximum Source Range Count: 128 00:08:21.297 NGUID/EUI64 Never Reused: No 00:08:21.297 Namespace Write Protected: No 00:08:21.297 Endurance group ID: 1 00:08:21.297 Number of LBA Formats: 8 00:08:21.297 Current LBA Format: LBA Format #04 00:08:21.297 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.297 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.297 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.297 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.297 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.297 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.297 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.297 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.297 00:08:21.297 Get Feature FDP: 00:08:21.297 ================ 00:08:21.297 Enabled: Yes 00:08:21.297 FDP configuration index: 0 00:08:21.297 00:08:21.297 FDP configurations log page 00:08:21.297 =========================== 00:08:21.297 Number of FDP configurations: 1 00:08:21.297 Version: 0 00:08:21.297 Size: 112 00:08:21.297 FDP Configuration Descriptor: 0 00:08:21.297 Descriptor Size: 96 00:08:21.297 Reclaim Group Identifier format: 2 00:08:21.297 FDP Volatile Write Cache: Not Present 00:08:21.297 FDP Configuration: Valid 00:08:21.297 Vendor Specific Size: 0 00:08:21.297 Number of Reclaim Groups: 2 00:08:21.297 Number of Recalim Unit Handles: 8 00:08:21.297 Max Placement Identifiers: 128 00:08:21.297 Number of Namespaces Suppprted: 256 00:08:21.297 Reclaim unit Nominal Size: 6000000 bytes 00:08:21.297 Estimated Reclaim Unit Time Limit: Not Reported 00:08:21.297 RUH Desc #000: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #001: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #002: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #003: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #004: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #005: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #006: RUH Type: Initially Isolated 00:08:21.297 RUH Desc #007: RUH Type: Initially Isolated 00:08:21.297 00:08:21.298 FDP reclaim unit handle usage log page 00:08:21.298 ================================[2024-11-29 14:16:03.032048] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75078 terminated unexpected 00:08:21.298 ====== 00:08:21.298 Number of Reclaim Unit Handles: 8 00:08:21.298 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:21.298 RUH Usage Desc #001: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #002: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #003: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #004: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #005: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #006: RUH Attributes: Unused 00:08:21.298 RUH Usage Desc #007: RUH Attributes: Unused 00:08:21.298 00:08:21.298 FDP statistics log page 00:08:21.298 ======================= 00:08:21.298 Host bytes with metadata written: 629579776 00:08:21.298 Media bytes with metadata written: 632283136 00:08:21.298 Media bytes 
erased: 0 00:08:21.298 00:08:21.298 FDP events log page 00:08:21.298 =================== 00:08:21.298 Number of FDP events: 0 00:08:21.298 00:08:21.298 NVM Specific Namespace Data 00:08:21.298 =========================== 00:08:21.298 Logical Block Storage Tag Mask: 0 00:08:21.298 Protection Information Capabilities: 00:08:21.298 16b Guard Protection Information Storage Tag Support: No 00:08:21.298 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.298 Storage Tag Check Read Support: No 00:08:21.298 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.298 ===================================================== 00:08:21.298 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.298 ===================================================== 00:08:21.298 Controller Capabilities/Features 00:08:21.298 ================================ 00:08:21.298 Vendor ID: 1b36 00:08:21.298 Subsystem Vendor ID: 1af4 00:08:21.298 Serial Number: 12340 00:08:21.298 Model Number: QEMU NVMe Ctrl 00:08:21.298 Firmware Version: 8.0.0 00:08:21.298 Recommended Arb Burst: 6 00:08:21.298 IEEE OUI Identifier: 00 54 52 00:08:21.298 Multi-path I/O 00:08:21.298 May have multiple subsystem ports: No 00:08:21.298 May have multiple controllers: No 00:08:21.298 Associated with SR-IOV VF: No 00:08:21.298 Max Data Transfer Size: 524288 00:08:21.298 Max Number of Namespaces: 256 00:08:21.298 Max Number of I/O Queues: 64 00:08:21.298 NVMe Specification Version (VS): 1.4 00:08:21.298 NVMe Specification Version (Identify): 1.4 00:08:21.298 Maximum Queue Entries: 2048 00:08:21.298 Contiguous Queues Required: Yes 00:08:21.298 Arbitration Mechanisms Supported 00:08:21.298 Weighted Round Robin: Not Supported 00:08:21.298 Vendor Specific: Not Supported 00:08:21.298 Reset Timeout: 7500 ms 00:08:21.298 Doorbell Stride: 4 bytes 00:08:21.298 NVM Subsystem Reset: Not Supported 00:08:21.298 Command Sets Supported 00:08:21.298 NVM Command Set: Supported 00:08:21.298 Boot Partition: Not Supported 00:08:21.298 Memory Page Size Minimum: 4096 bytes 00:08:21.298 Memory Page Size Maximum: 65536 bytes 00:08:21.298 Persistent Memory Region: Not Supported 00:08:21.298 Optional Asynchronous Events Supported 00:08:21.298 Namespace Attribute Notices: Supported 00:08:21.298 Firmware Activation Notices: Not Supported 00:08:21.298 ANA Change Notices: Not Supported 00:08:21.298 PLE Aggregate Log Change Notices: Not Supported 00:08:21.298 LBA Status Info Alert Notices: Not Supported 00:08:21.298 EGE Aggregate Log Change Notices: Not Supported 00:08:21.298 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.298 Zone Descriptor Change Notices: Not Supported 00:08:21.298 Discovery Log Change Notices: Not Supported 00:08:21.298 Controller Attributes 00:08:21.298 128-bit Host 
Identifier: Not Supported 00:08:21.298 Non-Operational Permissive Mode: Not Supported 00:08:21.298 NVM Sets: Not Supported 00:08:21.298 Read Recovery Levels: Not Supported 00:08:21.298 Endurance Groups: Not Supported 00:08:21.298 Predictable Latency Mode: Not Supported 00:08:21.298 Traffic Based Keep ALive: Not Supported 00:08:21.298 Namespace Granularity: Not Supported 00:08:21.298 SQ Associations: Not Supported 00:08:21.298 UUID List: Not Supported 00:08:21.298 Multi-Domain Subsystem: Not Supported 00:08:21.298 Fixed Capacity Management: Not Supported 00:08:21.298 Variable Capacity Management: Not Supported 00:08:21.298 Delete Endurance Group: Not Supported 00:08:21.298 Delete NVM Set: Not Supported 00:08:21.298 Extended LBA Formats Supported: Supported 00:08:21.298 Flexible Data Placement Supported: Not Supported 00:08:21.298 00:08:21.298 Controller Memory Buffer Support 00:08:21.298 ================================ 00:08:21.298 Supported: No 00:08:21.298 00:08:21.298 Persistent Memory Region Support 00:08:21.298 ================================ 00:08:21.298 Supported: No 00:08:21.298 00:08:21.298 Admin Command Set Attributes 00:08:21.298 ============================ 00:08:21.298 Security Send/Receive: Not Supported 00:08:21.298 Format NVM: Supported 00:08:21.298 Firmware Activate/Download: Not Supported 00:08:21.298 Namespace Management: Supported 00:08:21.298 Device Self-Test: Not Supported 00:08:21.298 Directives: Supported 00:08:21.298 NVMe-MI: Not Supported 00:08:21.298 Virtualization Management: Not Supported 00:08:21.298 Doorbell Buffer Config: Supported 00:08:21.298 Get LBA Status Capability: Not Supported 00:08:21.298 Command & Feature Lockdown Capability: Not Supported 00:08:21.298 Abort Command Limit: 4 00:08:21.298 Async Event Request Limit: 4 00:08:21.298 Number of Firmware Slots: N/A 00:08:21.298 Firmware Slot 1 Read-Only: N/A 00:08:21.298 Firmware Activation Without Reset: N/A 00:08:21.298 Multiple Update Detection Support: N/A 00:08:21.298 Firmware Update Granularity: No Information Provided 00:08:21.298 Per-Namespace SMART Log: Yes 00:08:21.298 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.298 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:21.298 Command Effects Log Page: Supported 00:08:21.298 Get Log Page Extended Data: Supported 00:08:21.298 Telemetry Log Pages: Not Supported 00:08:21.298 Persistent Event Log Pages: Not Supported 00:08:21.298 Supported Log Pages Log Page: May Support 00:08:21.298 Commands Supported & Effects Log Page: Not Supported 00:08:21.298 Feature Identifiers & Effects Log Page:May Support 00:08:21.298 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.298 Data Area 4 for Telemetry Log: Not Supported 00:08:21.298 Error Log Page Entries Supported: 1 00:08:21.298 Keep Alive: Not Supported 00:08:21.298 00:08:21.298 NVM Command Set Attributes 00:08:21.298 ========================== 00:08:21.298 Submission Queue Entry Size 00:08:21.298 Max: 64 00:08:21.298 Min: 64 00:08:21.298 Completion Queue Entry Size 00:08:21.298 Max: 16 00:08:21.298 Min: 16 00:08:21.298 Number of Namespaces: 256 00:08:21.298 Compare Command: Supported 00:08:21.298 Write Uncorrectable Command: Not Supported 00:08:21.298 Dataset Management Command: Supported 00:08:21.298 Write Zeroes Command: Supported 00:08:21.298 Set Features Save Field: Supported 00:08:21.298 Reservations: Not Supported 00:08:21.298 Timestamp: Supported 00:08:21.298 Copy: Supported 00:08:21.298 Volatile Write Cache: Present 00:08:21.298 Atomic Write Unit (Normal): 1 00:08:21.298 Atomic 
Write Unit (PFail): 1 00:08:21.298 Atomic Compare & Write Unit: 1 00:08:21.298 Fused Compare & Write: Not Supported 00:08:21.298 Scatter-Gather List 00:08:21.298 SGL Command Set: Supported 00:08:21.298 SGL Keyed: Not Supported 00:08:21.298 SGL Bit Bucket Descriptor: Not Supported 00:08:21.298 SGL Metadata Pointer: Not Supported 00:08:21.298 Oversized SGL: Not Supported 00:08:21.298 SGL Metadata Address: Not Supported 00:08:21.298 SGL Offset: Not Supported 00:08:21.298 Transport SGL Data Block: Not Supported 00:08:21.298 Replay Protected Memory Block: Not Supported 00:08:21.298 00:08:21.299 Firmware Slot Information 00:08:21.299 ========================= 00:08:21.299 Active slot: 1 00:08:21.299 Slot 1 Firmware Revision: 1.0 00:08:21.299 00:08:21.299 00:08:21.299 Commands Supported and Effects 00:08:21.299 ============================== 00:08:21.299 Admin Commands 00:08:21.299 -------------- 00:08:21.299 Delete I/O Submission Queue (00h): Supported 00:08:21.299 Create I/O Submission Queue (01h): Supported 00:08:21.299 Get Log Page (02h): Supported 00:08:21.299 Delete I/O Completion Queue (04h): Supported 00:08:21.299 Create I/O Completion Queue (05h): Supported 00:08:21.299 Identify (06h): Supported 00:08:21.299 Abort (08h): Supported 00:08:21.299 Set Features (09h): Supported 00:08:21.299 Get Features (0Ah): Supported 00:08:21.299 Asynchronous Event Request (0Ch): Supported 00:08:21.299 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.299 Directive Send (19h): Supported 00:08:21.299 Directive Receive (1Ah): Supported 00:08:21.299 Virtualization Management (1Ch): Supported 00:08:21.299 Doorbell Buffer Config (7Ch): Supported 00:08:21.299 Format NVM (80h): Supported LBA-Change 00:08:21.299 I/O Commands 00:08:21.299 ------------ 00:08:21.299 Flush (00h): Supported LBA-Change 00:08:21.299 Write (01h): Supported LBA-Change 00:08:21.299 Read (02h): Supported 00:08:21.299 Compare (05h): Supported 00:08:21.299 Write Zeroes (08h): Supported LBA-Change 00:08:21.299 Dataset Management (09h): Supported LBA-Change 00:08:21.299 Unknown (0Ch): Supported 00:08:21.299 Unknown (12h): Supported 00:08:21.299 Copy (19h): Supported LBA-Change 00:08:21.299 Unknown (1Dh): Supported LBA-Change 00:08:21.299 00:08:21.299 Error Log 00:08:21.299 ========= 00:08:21.299 00:08:21.299 Arbitration 00:08:21.299 =========== 00:08:21.299 Arbitration Burst: no limit 00:08:21.299 00:08:21.299 Power Management 00:08:21.299 ================ 00:08:21.299 Number of Power States: 1 00:08:21.299 Current Power State: Power State #0 00:08:21.299 Power State #0: 00:08:21.299 Max Power: 25.00 W 00:08:21.299 Non-Operational State: Operational 00:08:21.299 Entry Latency: 16 microseconds 00:08:21.299 Exit Latency: 4 microseconds 00:08:21.299 Relative Read Throughput: 0 00:08:21.299 Relative Read Latency: 0 00:08:21.299 Relative Write Throughput: 0 00:08:21.299 Relative Write Latency: 0 00:08:21.299 Idle Power: Not Reported 00:08:21.299 Active Power: Not Reported 00:08:21.299 Non-Operational Permissive Mode: Not Supported 00:08:21.299 00:08:21.299 Health Information 00:08:21.299 ================== 00:08:21.299 Critical Warnings: 00:08:21.299 Available Spare Space: OK 00:08:21.299 Temperature: OK 00:08:21.299 Device Reliability: OK 00:08:21.299 Read Only: No 00:08:21.299 Volatile Memory Backup: OK 00:08:21.299 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.299 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.299 Available Spare: 0% 00:08:21.299 Available Spare Threshold: 0% 00:08:21.299 Life Percentage Used: 0% 
00:08:21.299 Data Units Read: 715 00:08:21.299 Data Units Written: 643 00:08:21.299 Host Read Commands: 42863 00:08:21.299 Host Write Commands: 42649 00:08:21.299 Controller Busy Time: 0 minutes 00:08:21.299 Power Cycles: 0 00:08:21.299 Power On Hours: 0 hours 00:08:21.299 Unsafe Shutdowns: 0 00:08:21.299 Unrecoverable Media Errors: 0 00:08:21.299 Lifetime Error Log Entries: 0 00:08:21.299 Warning Temperature Time: 0 minutes 00:08:21.299 Critical Temperature Time: 0 minutes 00:08:21.299 00:08:21.299 Number of Queues 00:08:21.299 ================ 00:08:21.299 Number of I/O Submission Queues: 64 00:08:21.299 Number of I/O Completion Queues: 64 00:08:21.299 00:08:21.299 ZNS Specific Controller Data 00:08:21.299 ============================ 00:08:21.299 Zone Append Size Limit: 0 00:08:21.299 00:08:21.299 00:08:21.299 Active Namespaces 00:08:21.299 ================= 00:08:21.299 Namespace ID:1 00:08:21.299 Error Recovery Timeout: Unlimited 00:08:21.299 Command Set Identifier: NVM (00h) 00:08:21.299 Deallocate: Supported 00:08:21.299 Deallocated/Unwritten Error: Supported 00:08:21.299 Deallocated Read Value: All 0x00 00:08:21.299 Deallocate in Write Zeroes: Not Supported 00:08:21.299 Deallocated Guard Field: 0xFFFF 00:08:21.299 Flush: Supported 00:08:21.299 Reservation: Not Supported 00:08:21.299 Metadata Transferred as: Separate Metadata Buffer 00:08:21.299 Namespace Sharing Capabilities: Private 00:08:21.299 Size (in LBAs): 1548666 (5GiB) 00:08:21.299 Capacity (in LBAs): 1548666 (5GiB) 00:08:21.299 Utilization (in LBAs): 1548666 (5GiB) 00:08:21.299 Thin Provisioning: Not Supported 00:08:21.299 Per-NS Atomic Units: No 00:08:21.299 Maximum Single Source Range Length: 128 00:08:21.299 Maximum Copy Length: 128 00:08:21.299 Maximum Source Range Count: 128 00:08:21.299 NGUID/EUI64 Never Reused: No 00:08:21.299 Namespace Write Protected: No 00:08:21.299 Number of LBA Formats: 8 00:08:21.299 Current LBA Format: [2024-11-29 14:16:03.032752] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75078 terminated unexpected 00:08:21.299 LBA Format #07 00:08:21.299 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.299 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.299 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.299 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.299 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.299 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.299 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.299 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.299 00:08:21.299 NVM Specific Namespace Data 00:08:21.299 =========================== 00:08:21.299 Logical Block Storage Tag Mask: 0 00:08:21.299 Protection Information Capabilities: 00:08:21.299 16b Guard Protection Information Storage Tag Support: No 00:08:21.299 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.299 Storage Tag Check Read Support: No 00:08:21.299 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.299 ===================================================== 00:08:21.299 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.299 ===================================================== 00:08:21.299 Controller Capabilities/Features 00:08:21.299 ================================ 00:08:21.299 Vendor ID: 1b36 00:08:21.299 Subsystem Vendor ID: 1af4 00:08:21.299 Serial Number: 12341 00:08:21.299 Model Number: QEMU NVMe Ctrl 00:08:21.299 Firmware Version: 8.0.0 00:08:21.299 Recommended Arb Burst: 6 00:08:21.299 IEEE OUI Identifier: 00 54 52 00:08:21.299 Multi-path I/O 00:08:21.299 May have multiple subsystem ports: No 00:08:21.299 May have multiple controllers: No 00:08:21.299 Associated with SR-IOV VF: No 00:08:21.299 Max Data Transfer Size: 524288 00:08:21.299 Max Number of Namespaces: 256 00:08:21.299 Max Number of I/O Queues: 64 00:08:21.300 NVMe Specification Version (VS): 1.4 00:08:21.300 NVMe Specification Version (Identify): 1.4 00:08:21.300 Maximum Queue Entries: 2048 00:08:21.300 Contiguous Queues Required: Yes 00:08:21.300 Arbitration Mechanisms Supported 00:08:21.300 Weighted Round Robin: Not Supported 00:08:21.300 Vendor Specific: Not Supported 00:08:21.300 Reset Timeout: 7500 ms 00:08:21.300 Doorbell Stride: 4 bytes 00:08:21.300 NVM Subsystem Reset: Not Supported 00:08:21.300 Command Sets Supported 00:08:21.300 NVM Command Set: Supported 00:08:21.300 Boot Partition: Not Supported 00:08:21.300 Memory Page Size Minimum: 4096 bytes 00:08:21.300 Memory Page Size Maximum: 65536 bytes 00:08:21.300 Persistent Memory Region: Not Supported 00:08:21.300 Optional Asynchronous Events Supported 00:08:21.300 Namespace Attribute Notices: Supported 00:08:21.300 Firmware Activation Notices: Not Supported 00:08:21.300 ANA Change Notices: Not Supported 00:08:21.300 PLE Aggregate Log Change Notices: Not Supported 00:08:21.300 LBA Status Info Alert Notices: Not Supported 00:08:21.300 EGE Aggregate Log Change Notices: Not Supported 00:08:21.300 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.300 Zone Descriptor Change Notices: Not Supported 00:08:21.300 Discovery Log Change Notices: Not Supported 00:08:21.300 Controller Attributes 00:08:21.300 128-bit Host Identifier: Not Supported 00:08:21.300 Non-Operational Permissive Mode: Not Supported 00:08:21.300 NVM Sets: Not Supported 00:08:21.300 Read Recovery Levels: Not Supported 00:08:21.300 Endurance Groups: Not Supported 00:08:21.300 Predictable Latency Mode: Not Supported 00:08:21.300 Traffic Based Keep ALive: Not Supported 00:08:21.300 Namespace Granularity: Not Supported 00:08:21.300 SQ Associations: Not Supported 00:08:21.300 UUID List: Not Supported 00:08:21.300 Multi-Domain Subsystem: Not Supported 00:08:21.300 Fixed Capacity Management: Not Supported 00:08:21.300 Variable Capacity Management: Not Supported 00:08:21.300 Delete Endurance Group: Not Supported 00:08:21.300 Delete NVM Set: Not Supported 00:08:21.300 Extended LBA Formats Supported: Supported 00:08:21.300 Flexible Data Placement Supported: Not Supported 00:08:21.300 00:08:21.300 Controller Memory Buffer Support 00:08:21.300 ================================ 00:08:21.300 Supported: No 00:08:21.300 00:08:21.300 Persistent Memory Region Support 00:08:21.300 ================================ 00:08:21.300 
Supported: No 00:08:21.300 00:08:21.300 Admin Command Set Attributes 00:08:21.300 ============================ 00:08:21.300 Security Send/Receive: Not Supported 00:08:21.300 Format NVM: Supported 00:08:21.300 Firmware Activate/Download: Not Supported 00:08:21.300 Namespace Management: Supported 00:08:21.300 Device Self-Test: Not Supported 00:08:21.300 Directives: Supported 00:08:21.300 NVMe-MI: Not Supported 00:08:21.300 Virtualization Management: Not Supported 00:08:21.300 Doorbell Buffer Config: Supported 00:08:21.300 Get LBA Status Capability: Not Supported 00:08:21.300 Command & Feature Lockdown Capability: Not Supported 00:08:21.300 Abort Command Limit: 4 00:08:21.300 Async Event Request Limit: 4 00:08:21.300 Number of Firmware Slots: N/A 00:08:21.300 Firmware Slot 1 Read-Only: N/A 00:08:21.300 Firmware Activation Without Reset: N/A 00:08:21.300 Multiple Update Detection Support: N/A 00:08:21.300 Firmware Update Granularity: No Information Provided 00:08:21.300 Per-Namespace SMART Log: Yes 00:08:21.300 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.300 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:21.300 Command Effects Log Page: Supported 00:08:21.300 Get Log Page Extended Data: Supported 00:08:21.300 Telemetry Log Pages: Not Supported 00:08:21.300 Persistent Event Log Pages: Not Supported 00:08:21.300 Supported Log Pages Log Page: May Support 00:08:21.300 Commands Supported & Effects Log Page: Not Supported 00:08:21.300 Feature Identifiers & Effects Log Page:May Support 00:08:21.300 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.300 Data Area 4 for Telemetry Log: Not Supported 00:08:21.300 Error Log Page Entries Supported: 1 00:08:21.300 Keep Alive: Not Supported 00:08:21.300 00:08:21.300 NVM Command Set Attributes 00:08:21.300 ========================== 00:08:21.300 Submission Queue Entry Size 00:08:21.300 Max: 64 00:08:21.300 Min: 64 00:08:21.300 Completion Queue Entry Size 00:08:21.300 Max: 16 00:08:21.300 Min: 16 00:08:21.300 Number of Namespaces: 256 00:08:21.300 Compare Command: Supported 00:08:21.300 Write Uncorrectable Command: Not Supported 00:08:21.300 Dataset Management Command: Supported 00:08:21.300 Write Zeroes Command: Supported 00:08:21.300 Set Features Save Field: Supported 00:08:21.300 Reservations: Not Supported 00:08:21.300 Timestamp: Supported 00:08:21.300 Copy: Supported 00:08:21.300 Volatile Write Cache: Present 00:08:21.300 Atomic Write Unit (Normal): 1 00:08:21.300 Atomic Write Unit (PFail): 1 00:08:21.300 Atomic Compare & Write Unit: 1 00:08:21.300 Fused Compare & Write: Not Supported 00:08:21.300 Scatter-Gather List 00:08:21.300 SGL Command Set: Supported 00:08:21.300 SGL Keyed: Not Supported 00:08:21.300 SGL Bit Bucket Descriptor: Not Supported 00:08:21.300 SGL Metadata Pointer: Not Supported 00:08:21.300 Oversized SGL: Not Supported 00:08:21.300 SGL Metadata Address: Not Supported 00:08:21.300 SGL Offset: Not Supported 00:08:21.300 Transport SGL Data Block: Not Supported 00:08:21.300 Replay Protected Memory Block: Not Supported 00:08:21.300 00:08:21.300 Firmware Slot Information 00:08:21.300 ========================= 00:08:21.300 Active slot: 1 00:08:21.300 Slot 1 Firmware Revision: 1.0 00:08:21.300 00:08:21.300 00:08:21.300 Commands Supported and Effects 00:08:21.300 ============================== 00:08:21.300 Admin Commands 00:08:21.300 -------------- 00:08:21.300 Delete I/O Submission Queue (00h): Supported 00:08:21.300 Create I/O Submission Queue (01h): Supported 00:08:21.300 Get Log Page (02h): Supported 00:08:21.300 
Delete I/O Completion Queue (04h): Supported 00:08:21.300 Create I/O Completion Queue (05h): Supported 00:08:21.300 Identify (06h): Supported 00:08:21.300 Abort (08h): Supported 00:08:21.300 Set Features (09h): Supported 00:08:21.300 Get Features (0Ah): Supported 00:08:21.300 Asynchronous Event Request (0Ch): Supported 00:08:21.300 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.300 Directive Send (19h): Supported 00:08:21.300 Directive Receive (1Ah): Supported 00:08:21.300 Virtualization Management (1Ch): Supported 00:08:21.300 Doorbell Buffer Config (7Ch): Supported 00:08:21.300 Format NVM (80h): Supported LBA-Change 00:08:21.300 I/O Commands 00:08:21.300 ------------ 00:08:21.300 Flush (00h): Supported LBA-Change 00:08:21.300 Write (01h): Supported LBA-Change 00:08:21.300 Read (02h): Supported 00:08:21.300 Compare (05h): Supported 00:08:21.300 Write Zeroes (08h): Supported LBA-Change 00:08:21.300 Dataset Management (09h): Supported LBA-Change 00:08:21.300 Unknown (0Ch): Supported 00:08:21.300 Unknown (12h): Supported 00:08:21.300 Copy (19h): Supported LBA-Change 00:08:21.300 Unknown (1Dh): Supported LBA-Change 00:08:21.300 00:08:21.300 Error Log 00:08:21.300 ========= 00:08:21.300 00:08:21.300 Arbitration 00:08:21.300 =========== 00:08:21.300 Arbitration Burst: no limit 00:08:21.300 00:08:21.300 Power Management 00:08:21.300 ================ 00:08:21.300 Number of Power States: 1 00:08:21.300 Current Power State: Power State #0 00:08:21.300 Power State #0: 00:08:21.300 Max Power: 25.00 W 00:08:21.300 Non-Operational State: Operational 00:08:21.300 Entry Latency: 16 microseconds 00:08:21.300 Exit Latency: 4 microseconds 00:08:21.300 Relative Read Throughput: 0 00:08:21.300 Relative Read Latency: 0 00:08:21.300 Relative Write Throughput: 0 00:08:21.300 Relative Write Latency: 0 00:08:21.300 Idle Power: Not Reported 00:08:21.300 Active Power: Not Reported 00:08:21.300 Non-Operational Permissive Mode: Not Supported 00:08:21.300 00:08:21.300 Health Information 00:08:21.300 ================== 00:08:21.300 Critical Warnings: 00:08:21.300 Available Spare Space: OK 00:08:21.300 Temperature: OK 00:08:21.300 Device Reliability: OK 00:08:21.300 Read Only: No 00:08:21.300 Volatile Memory Backup: OK 00:08:21.300 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.300 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.300 Available Spare: 0% 00:08:21.301 Available Spare Threshold: 0% 00:08:21.301 Life Percentage Used: 0% 00:08:21.301 Data Units Read: 1089 00:08:21.301 Data Units Written: 950 00:08:21.301 Host Read Commands: 63102 00:08:21.301 Host Write Commands: 61795 00:08:21.301 Controller Busy Time: 0 minutes 00:08:21.301 Power Cycles: 0 00:08:21.301 Power On Hours: 0 hours 00:08:21.301 Unsafe Shutdowns: 0 00:08:21.301 Unrecoverable Media Errors: 0 00:08:21.301 Lifetime Error Log Entries: 0 00:08:21.301 Warning Temperature Time: 0 minutes 00:08:21.301 Critical Temperature Time: 0 minutes 00:08:21.301 00:08:21.301 Number of Queues 00:08:21.301 ================ 00:08:21.301 Number of I/O Submission Queues: 64 00:08:21.301 Number of I/O Completion Queues: 64 00:08:21.301 00:08:21.301 ZNS Specific Controller Data 00:08:21.301 ============================ 00:08:21.301 Zone Append Size Limit: 0 00:08:21.301 00:08:21.301 00:08:21.301 Active Namespaces 00:08:21.301 ================= 00:08:21.301 Namespace ID:1 00:08:21.301 Error Recovery Timeout: Unlimited 00:08:21.301 Command Set Identifier: NVM (00h) 00:08:21.301 Deallocate: Supported 00:08:21.301 Deallocated/Unwritten Error: 
Supported 00:08:21.301 Deallocated Read Value: All 0x00 00:08:21.301 Deallocate in Write Zeroes: Not Supported 00:08:21.301 Deallocated Guard Field: 0xFFFF 00:08:21.301 Flush: Supported 00:08:21.301 Reservation: Not Supported 00:08:21.301 Namespace Sharing Capabilities: Private 00:08:21.301 Size (in LBAs): 1310720 (5GiB) 00:08:21.301 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.301 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.301 Thin Provisioning: Not Supported 00:08:21.301 Per-NS Atomic Units: No 00:08:21.301 Maximum Single Source Range Length: 128 00:08:21.301 Maximum Copy Length: 128 00:08:21.301 Maximum Source Range Count: 128 00:08:21.301 NGUID/EUI64 Never Reused: No 00:08:21.301 Namespace Write Protected: No 00:08:21.301 Number of LBA Formats: 8 00:08:21.301 Current LBA Format: LBA Format #04 00:08:21.301 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.301 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.301 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.301 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.301 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-11-29 14:16:03.033754] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75078 terminated unexpected 00:08:21.301 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.301 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.301 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.301 00:08:21.301 NVM Specific Namespace Data 00:08:21.301 =========================== 00:08:21.301 Logical Block Storage Tag Mask: 0 00:08:21.301 Protection Information Capabilities: 00:08:21.301 16b Guard Protection Information Storage Tag Support: No 00:08:21.301 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.301 Storage Tag Check Read Support: No 00:08:21.301 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.301 ===================================================== 00:08:21.301 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.301 ===================================================== 00:08:21.301 Controller Capabilities/Features 00:08:21.301 ================================ 00:08:21.301 Vendor ID: 1b36 00:08:21.301 Subsystem Vendor ID: 1af4 00:08:21.301 Serial Number: 12342 00:08:21.301 Model Number: QEMU NVMe Ctrl 00:08:21.301 Firmware Version: 8.0.0 00:08:21.301 Recommended Arb Burst: 6 00:08:21.301 IEEE OUI Identifier: 00 54 52 00:08:21.301 Multi-path I/O 00:08:21.301 May have multiple subsystem ports: No 00:08:21.301 May have multiple controllers: No 00:08:21.301 Associated with SR-IOV VF: No 00:08:21.301 Max Data Transfer Size: 524288 00:08:21.301 Max Number of Namespaces: 256 00:08:21.301 Max Number of I/O Queues: 64 00:08:21.301 NVMe 
Specification Version (VS): 1.4 00:08:21.301 NVMe Specification Version (Identify): 1.4 00:08:21.301 Maximum Queue Entries: 2048 00:08:21.301 Contiguous Queues Required: Yes 00:08:21.301 Arbitration Mechanisms Supported 00:08:21.301 Weighted Round Robin: Not Supported 00:08:21.301 Vendor Specific: Not Supported 00:08:21.301 Reset Timeout: 7500 ms 00:08:21.301 Doorbell Stride: 4 bytes 00:08:21.301 NVM Subsystem Reset: Not Supported 00:08:21.301 Command Sets Supported 00:08:21.301 NVM Command Set: Supported 00:08:21.301 Boot Partition: Not Supported 00:08:21.301 Memory Page Size Minimum: 4096 bytes 00:08:21.301 Memory Page Size Maximum: 65536 bytes 00:08:21.301 Persistent Memory Region: Not Supported 00:08:21.301 Optional Asynchronous Events Supported 00:08:21.301 Namespace Attribute Notices: Supported 00:08:21.301 Firmware Activation Notices: Not Supported 00:08:21.301 ANA Change Notices: Not Supported 00:08:21.301 PLE Aggregate Log Change Notices: Not Supported 00:08:21.301 LBA Status Info Alert Notices: Not Supported 00:08:21.301 EGE Aggregate Log Change Notices: Not Supported 00:08:21.301 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.301 Zone Descriptor Change Notices: Not Supported 00:08:21.301 Discovery Log Change Notices: Not Supported 00:08:21.301 Controller Attributes 00:08:21.301 128-bit Host Identifier: Not Supported 00:08:21.301 Non-Operational Permissive Mode: Not Supported 00:08:21.301 NVM Sets: Not Supported 00:08:21.301 Read Recovery Levels: Not Supported 00:08:21.301 Endurance Groups: Not Supported 00:08:21.301 Predictable Latency Mode: Not Supported 00:08:21.301 Traffic Based Keep ALive: Not Supported 00:08:21.301 Namespace Granularity: Not Supported 00:08:21.301 SQ Associations: Not Supported 00:08:21.301 UUID List: Not Supported 00:08:21.301 Multi-Domain Subsystem: Not Supported 00:08:21.301 Fixed Capacity Management: Not Supported 00:08:21.302 Variable Capacity Management: Not Supported 00:08:21.302 Delete Endurance Group: Not Supported 00:08:21.302 Delete NVM Set: Not Supported 00:08:21.302 Extended LBA Formats Supported: Supported 00:08:21.302 Flexible Data Placement Supported: Not Supported 00:08:21.302 00:08:21.302 Controller Memory Buffer Support 00:08:21.302 ================================ 00:08:21.302 Supported: No 00:08:21.302 00:08:21.302 Persistent Memory Region Support 00:08:21.302 ================================ 00:08:21.302 Supported: No 00:08:21.302 00:08:21.302 Admin Command Set Attributes 00:08:21.302 ============================ 00:08:21.302 Security Send/Receive: Not Supported 00:08:21.302 Format NVM: Supported 00:08:21.302 Firmware Activate/Download: Not Supported 00:08:21.302 Namespace Management: Supported 00:08:21.302 Device Self-Test: Not Supported 00:08:21.302 Directives: Supported 00:08:21.302 NVMe-MI: Not Supported 00:08:21.302 Virtualization Management: Not Supported 00:08:21.302 Doorbell Buffer Config: Supported 00:08:21.302 Get LBA Status Capability: Not Supported 00:08:21.302 Command & Feature Lockdown Capability: Not Supported 00:08:21.302 Abort Command Limit: 4 00:08:21.302 Async Event Request Limit: 4 00:08:21.302 Number of Firmware Slots: N/A 00:08:21.302 Firmware Slot 1 Read-Only: N/A 00:08:21.302 Firmware Activation Without Reset: N/A 00:08:21.302 Multiple Update Detection Support: N/A 00:08:21.302 Firmware Update Granularity: No Information Provided 00:08:21.302 Per-Namespace SMART Log: Yes 00:08:21.302 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.302 Subsystem NQN: nqn.2019-08.org.qemu:12342 
00:08:21.302 Command Effects Log Page: Supported 00:08:21.302 Get Log Page Extended Data: Supported 00:08:21.302 Telemetry Log Pages: Not Supported 00:08:21.302 Persistent Event Log Pages: Not Supported 00:08:21.302 Supported Log Pages Log Page: May Support 00:08:21.302 Commands Supported & Effects Log Page: Not Supported 00:08:21.302 Feature Identifiers & Effects Log Page:May Support 00:08:21.302 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.302 Data Area 4 for Telemetry Log: Not Supported 00:08:21.302 Error Log Page Entries Supported: 1 00:08:21.302 Keep Alive: Not Supported 00:08:21.302 00:08:21.302 NVM Command Set Attributes 00:08:21.302 ========================== 00:08:21.302 Submission Queue Entry Size 00:08:21.302 Max: 64 00:08:21.302 Min: 64 00:08:21.302 Completion Queue Entry Size 00:08:21.302 Max: 16 00:08:21.302 Min: 16 00:08:21.302 Number of Namespaces: 256 00:08:21.302 Compare Command: Supported 00:08:21.302 Write Uncorrectable Command: Not Supported 00:08:21.302 Dataset Management Command: Supported 00:08:21.302 Write Zeroes Command: Supported 00:08:21.302 Set Features Save Field: Supported 00:08:21.302 Reservations: Not Supported 00:08:21.302 Timestamp: Supported 00:08:21.302 Copy: Supported 00:08:21.302 Volatile Write Cache: Present 00:08:21.302 Atomic Write Unit (Normal): 1 00:08:21.302 Atomic Write Unit (PFail): 1 00:08:21.302 Atomic Compare & Write Unit: 1 00:08:21.302 Fused Compare & Write: Not Supported 00:08:21.302 Scatter-Gather List 00:08:21.302 SGL Command Set: Supported 00:08:21.302 SGL Keyed: Not Supported 00:08:21.302 SGL Bit Bucket Descriptor: Not Supported 00:08:21.302 SGL Metadata Pointer: Not Supported 00:08:21.302 Oversized SGL: Not Supported 00:08:21.302 SGL Metadata Address: Not Supported 00:08:21.302 SGL Offset: Not Supported 00:08:21.302 Transport SGL Data Block: Not Supported 00:08:21.302 Replay Protected Memory Block: Not Supported 00:08:21.302 00:08:21.302 Firmware Slot Information 00:08:21.302 ========================= 00:08:21.302 Active slot: 1 00:08:21.302 Slot 1 Firmware Revision: 1.0 00:08:21.302 00:08:21.302 00:08:21.302 Commands Supported and Effects 00:08:21.302 ============================== 00:08:21.302 Admin Commands 00:08:21.302 -------------- 00:08:21.302 Delete I/O Submission Queue (00h): Supported 00:08:21.302 Create I/O Submission Queue (01h): Supported 00:08:21.302 Get Log Page (02h): Supported 00:08:21.302 Delete I/O Completion Queue (04h): Supported 00:08:21.302 Create I/O Completion Queue (05h): Supported 00:08:21.302 Identify (06h): Supported 00:08:21.302 Abort (08h): Supported 00:08:21.302 Set Features (09h): Supported 00:08:21.302 Get Features (0Ah): Supported 00:08:21.302 Asynchronous Event Request (0Ch): Supported 00:08:21.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.302 Directive Send (19h): Supported 00:08:21.302 Directive Receive (1Ah): Supported 00:08:21.302 Virtualization Management (1Ch): Supported 00:08:21.302 Doorbell Buffer Config (7Ch): Supported 00:08:21.302 Format NVM (80h): Supported LBA-Change 00:08:21.302 I/O Commands 00:08:21.302 ------------ 00:08:21.302 Flush (00h): Supported LBA-Change 00:08:21.302 Write (01h): Supported LBA-Change 00:08:21.302 Read (02h): Supported 00:08:21.302 Compare (05h): Supported 00:08:21.302 Write Zeroes (08h): Supported LBA-Change 00:08:21.302 Dataset Management (09h): Supported LBA-Change 00:08:21.302 Unknown (0Ch): Supported 00:08:21.302 Unknown (12h): Supported 00:08:21.302 Copy (19h): Supported LBA-Change 00:08:21.302 Unknown (1Dh): 
Supported LBA-Change 00:08:21.302 00:08:21.302 Error Log 00:08:21.302 ========= 00:08:21.302 00:08:21.302 Arbitration 00:08:21.302 =========== 00:08:21.302 Arbitration Burst: no limit 00:08:21.302 00:08:21.302 Power Management 00:08:21.302 ================ 00:08:21.302 Number of Power States: 1 00:08:21.302 Current Power State: Power State #0 00:08:21.302 Power State #0: 00:08:21.302 Max Power: 25.00 W 00:08:21.302 Non-Operational State: Operational 00:08:21.302 Entry Latency: 16 microseconds 00:08:21.302 Exit Latency: 4 microseconds 00:08:21.302 Relative Read Throughput: 0 00:08:21.302 Relative Read Latency: 0 00:08:21.302 Relative Write Throughput: 0 00:08:21.302 Relative Write Latency: 0 00:08:21.302 Idle Power: Not Reported 00:08:21.302 Active Power: Not Reported 00:08:21.302 Non-Operational Permissive Mode: Not Supported 00:08:21.302 00:08:21.302 Health Information 00:08:21.302 ================== 00:08:21.302 Critical Warnings: 00:08:21.302 Available Spare Space: OK 00:08:21.302 Temperature: OK 00:08:21.302 Device Reliability: OK 00:08:21.302 Read Only: No 00:08:21.302 Volatile Memory Backup: OK 00:08:21.302 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.302 Available Spare: 0% 00:08:21.302 Available Spare Threshold: 0% 00:08:21.302 Life Percentage Used: 0% 00:08:21.302 Data Units Read: 2437 00:08:21.302 Data Units Written: 2224 00:08:21.302 Host Read Commands: 131598 00:08:21.302 Host Write Commands: 129867 00:08:21.302 Controller Busy Time: 0 minutes 00:08:21.302 Power Cycles: 0 00:08:21.302 Power On Hours: 0 hours 00:08:21.302 Unsafe Shutdowns: 0 00:08:21.302 Unrecoverable Media Errors: 0 00:08:21.302 Lifetime Error Log Entries: 0 00:08:21.302 Warning Temperature Time: 0 minutes 00:08:21.302 Critical Temperature Time: 0 minutes 00:08:21.302 00:08:21.302 Number of Queues 00:08:21.302 ================ 00:08:21.302 Number of I/O Submission Queues: 64 00:08:21.302 Number of I/O Completion Queues: 64 00:08:21.302 00:08:21.302 ZNS Specific Controller Data 00:08:21.302 ============================ 00:08:21.302 Zone Append Size Limit: 0 00:08:21.302 00:08:21.302 00:08:21.302 Active Namespaces 00:08:21.302 ================= 00:08:21.302 Namespace ID:1 00:08:21.302 Error Recovery Timeout: Unlimited 00:08:21.302 Command Set Identifier: NVM (00h) 00:08:21.302 Deallocate: Supported 00:08:21.302 Deallocated/Unwritten Error: Supported 00:08:21.302 Deallocated Read Value: All 0x00 00:08:21.302 Deallocate in Write Zeroes: Not Supported 00:08:21.302 Deallocated Guard Field: 0xFFFF 00:08:21.302 Flush: Supported 00:08:21.302 Reservation: Not Supported 00:08:21.302 Namespace Sharing Capabilities: Private 00:08:21.302 Size (in LBAs): 1048576 (4GiB) 00:08:21.302 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.302 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.302 Thin Provisioning: Not Supported 00:08:21.302 Per-NS Atomic Units: No 00:08:21.302 Maximum Single Source Range Length: 128 00:08:21.302 Maximum Copy Length: 128 00:08:21.302 Maximum Source Range Count: 128 00:08:21.302 NGUID/EUI64 Never Reused: No 00:08:21.302 Namespace Write Protected: No 00:08:21.302 Number of LBA Formats: 8 00:08:21.302 Current LBA Format: LBA Format #04 00:08:21.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.302 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.302 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.302 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:21.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.303 00:08:21.303 NVM Specific Namespace Data 00:08:21.303 =========================== 00:08:21.303 Logical Block Storage Tag Mask: 0 00:08:21.303 Protection Information Capabilities: 00:08:21.303 16b Guard Protection Information Storage Tag Support: No 00:08:21.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.303 Storage Tag Check Read Support: No 00:08:21.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Namespace ID:2 00:08:21.303 Error Recovery Timeout: Unlimited 00:08:21.303 Command Set Identifier: NVM (00h) 00:08:21.303 Deallocate: Supported 00:08:21.303 Deallocated/Unwritten Error: Supported 00:08:21.303 Deallocated Read Value: All 0x00 00:08:21.303 Deallocate in Write Zeroes: Not Supported 00:08:21.303 Deallocated Guard Field: 0xFFFF 00:08:21.303 Flush: Supported 00:08:21.303 Reservation: Not Supported 00:08:21.303 Namespace Sharing Capabilities: Private 00:08:21.303 Size (in LBAs): 1048576 (4GiB) 00:08:21.303 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.303 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.303 Thin Provisioning: Not Supported 00:08:21.303 Per-NS Atomic Units: No 00:08:21.303 Maximum Single Source Range Length: 128 00:08:21.303 Maximum Copy Length: 128 00:08:21.303 Maximum Source Range Count: 128 00:08:21.303 NGUID/EUI64 Never Reused: No 00:08:21.303 Namespace Write Protected: No 00:08:21.303 Number of LBA Formats: 8 00:08:21.303 Current LBA Format: LBA Format #04 00:08:21.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.303 00:08:21.303 NVM Specific Namespace Data 00:08:21.303 =========================== 00:08:21.303 Logical Block Storage Tag Mask: 0 00:08:21.303 Protection Information Capabilities: 00:08:21.303 16b Guard Protection Information Storage Tag Support: No 00:08:21.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.303 Storage Tag Check Read Support: No 00:08:21.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Namespace ID:3 00:08:21.303 Error Recovery Timeout: Unlimited 00:08:21.303 Command Set Identifier: NVM (00h) 00:08:21.303 Deallocate: Supported 00:08:21.303 Deallocated/Unwritten Error: Supported 00:08:21.303 Deallocated Read Value: All 0x00 00:08:21.303 Deallocate in Write Zeroes: Not Supported 00:08:21.303 Deallocated Guard Field: 0xFFFF 00:08:21.303 Flush: Supported 00:08:21.303 Reservation: Not Supported 00:08:21.303 Namespace Sharing Capabilities: Private 00:08:21.303 Size (in LBAs): 1048576 (4GiB) 00:08:21.303 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.303 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.303 Thin Provisioning: Not Supported 00:08:21.303 Per-NS Atomic Units: No 00:08:21.303 Maximum Single Source Range Length: 128 00:08:21.303 Maximum Copy Length: 128 00:08:21.303 Maximum Source Range Count: 128 00:08:21.303 NGUID/EUI64 Never Reused: No 00:08:21.303 Namespace Write Protected: No 00:08:21.303 Number of LBA Formats: 8 00:08:21.303 Current LBA Format: LBA Format #04 00:08:21.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.303 00:08:21.303 NVM Specific Namespace Data 00:08:21.303 =========================== 00:08:21.303 Logical Block Storage Tag Mask: 0 00:08:21.303 Protection Information Capabilities: 00:08:21.303 16b Guard Protection Information Storage Tag Support: No 00:08:21.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.303 Storage Tag Check Read Support: No 00:08:21.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.303 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:21.562 ===================================================== 00:08:21.562 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.562 ===================================================== 00:08:21.562 Controller Capabilities/Features 00:08:21.562 ================================ 00:08:21.562 Vendor ID: 1b36 00:08:21.562 Subsystem Vendor ID: 1af4 00:08:21.562 Serial Number: 12340 00:08:21.562 Model Number: QEMU NVMe Ctrl 00:08:21.562 Firmware Version: 8.0.0 00:08:21.562 Recommended Arb Burst: 6 00:08:21.562 IEEE OUI Identifier: 00 54 52 00:08:21.562 Multi-path I/O 00:08:21.562 May have multiple subsystem ports: No 00:08:21.562 May have multiple controllers: No 00:08:21.562 Associated with SR-IOV VF: No 00:08:21.562 Max Data Transfer Size: 524288 00:08:21.562 Max Number of Namespaces: 256 00:08:21.562 Max Number of I/O Queues: 64 00:08:21.562 NVMe Specification Version (VS): 1.4 00:08:21.562 NVMe Specification Version (Identify): 1.4 00:08:21.562 Maximum Queue Entries: 2048 00:08:21.562 Contiguous Queues Required: Yes 00:08:21.562 Arbitration Mechanisms Supported 00:08:21.562 Weighted Round Robin: Not Supported 00:08:21.562 Vendor Specific: Not Supported 00:08:21.562 Reset Timeout: 7500 ms 00:08:21.562 Doorbell Stride: 4 bytes 00:08:21.562 NVM Subsystem Reset: Not Supported 00:08:21.563 Command Sets Supported 00:08:21.563 NVM Command Set: Supported 00:08:21.563 Boot Partition: Not Supported 00:08:21.563 Memory Page Size Minimum: 4096 bytes 00:08:21.563 Memory Page Size Maximum: 65536 bytes 00:08:21.563 Persistent Memory Region: Not Supported 00:08:21.563 Optional Asynchronous Events Supported 00:08:21.563 Namespace Attribute Notices: Supported 00:08:21.563 Firmware Activation Notices: Not Supported 00:08:21.563 ANA Change Notices: Not Supported 00:08:21.563 PLE Aggregate Log Change Notices: Not Supported 00:08:21.563 LBA Status Info Alert Notices: Not Supported 00:08:21.563 EGE Aggregate Log Change Notices: Not Supported 00:08:21.563 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.563 Zone Descriptor Change Notices: Not Supported 00:08:21.563 Discovery Log Change Notices: Not Supported 00:08:21.563 Controller Attributes 00:08:21.563 128-bit Host Identifier: Not Supported 00:08:21.563 Non-Operational Permissive Mode: Not Supported 00:08:21.563 NVM Sets: Not Supported 00:08:21.563 Read Recovery Levels: Not Supported 00:08:21.563 Endurance Groups: Not Supported 00:08:21.563 Predictable Latency Mode: Not Supported 00:08:21.563 Traffic Based Keep ALive: Not Supported 00:08:21.563 Namespace Granularity: Not Supported 00:08:21.563 SQ Associations: Not Supported 00:08:21.563 UUID List: Not Supported 00:08:21.563 Multi-Domain Subsystem: Not Supported 00:08:21.563 Fixed Capacity Management: Not Supported 00:08:21.563 Variable Capacity Management: Not Supported 00:08:21.563 Delete Endurance Group: Not Supported 00:08:21.563 Delete NVM Set: Not Supported 00:08:21.563 Extended LBA Formats Supported: Supported 00:08:21.563 Flexible Data Placement Supported: Not Supported 00:08:21.563 00:08:21.563 Controller Memory Buffer Support 00:08:21.563 ================================ 00:08:21.563 Supported: No 00:08:21.563 00:08:21.563 Persistent Memory Region Support 00:08:21.563 ================================ 00:08:21.563 Supported: No 00:08:21.563 00:08:21.563 Admin Command Set Attributes 00:08:21.563 ============================ 00:08:21.563 Security Send/Receive: Not Supported 00:08:21.563 
Format NVM: Supported 00:08:21.563 Firmware Activate/Download: Not Supported 00:08:21.563 Namespace Management: Supported 00:08:21.563 Device Self-Test: Not Supported 00:08:21.563 Directives: Supported 00:08:21.563 NVMe-MI: Not Supported 00:08:21.563 Virtualization Management: Not Supported 00:08:21.563 Doorbell Buffer Config: Supported 00:08:21.563 Get LBA Status Capability: Not Supported 00:08:21.563 Command & Feature Lockdown Capability: Not Supported 00:08:21.563 Abort Command Limit: 4 00:08:21.563 Async Event Request Limit: 4 00:08:21.563 Number of Firmware Slots: N/A 00:08:21.563 Firmware Slot 1 Read-Only: N/A 00:08:21.563 Firmware Activation Without Reset: N/A 00:08:21.563 Multiple Update Detection Support: N/A 00:08:21.563 Firmware Update Granularity: No Information Provided 00:08:21.563 Per-Namespace SMART Log: Yes 00:08:21.563 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.563 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:21.563 Command Effects Log Page: Supported 00:08:21.563 Get Log Page Extended Data: Supported 00:08:21.563 Telemetry Log Pages: Not Supported 00:08:21.563 Persistent Event Log Pages: Not Supported 00:08:21.563 Supported Log Pages Log Page: May Support 00:08:21.563 Commands Supported & Effects Log Page: Not Supported 00:08:21.563 Feature Identifiers & Effects Log Page:May Support 00:08:21.563 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.563 Data Area 4 for Telemetry Log: Not Supported 00:08:21.563 Error Log Page Entries Supported: 1 00:08:21.563 Keep Alive: Not Supported 00:08:21.563 00:08:21.563 NVM Command Set Attributes 00:08:21.563 ========================== 00:08:21.563 Submission Queue Entry Size 00:08:21.563 Max: 64 00:08:21.563 Min: 64 00:08:21.563 Completion Queue Entry Size 00:08:21.563 Max: 16 00:08:21.563 Min: 16 00:08:21.563 Number of Namespaces: 256 00:08:21.563 Compare Command: Supported 00:08:21.563 Write Uncorrectable Command: Not Supported 00:08:21.563 Dataset Management Command: Supported 00:08:21.563 Write Zeroes Command: Supported 00:08:21.563 Set Features Save Field: Supported 00:08:21.563 Reservations: Not Supported 00:08:21.563 Timestamp: Supported 00:08:21.563 Copy: Supported 00:08:21.563 Volatile Write Cache: Present 00:08:21.563 Atomic Write Unit (Normal): 1 00:08:21.563 Atomic Write Unit (PFail): 1 00:08:21.563 Atomic Compare & Write Unit: 1 00:08:21.563 Fused Compare & Write: Not Supported 00:08:21.563 Scatter-Gather List 00:08:21.563 SGL Command Set: Supported 00:08:21.563 SGL Keyed: Not Supported 00:08:21.563 SGL Bit Bucket Descriptor: Not Supported 00:08:21.563 SGL Metadata Pointer: Not Supported 00:08:21.563 Oversized SGL: Not Supported 00:08:21.563 SGL Metadata Address: Not Supported 00:08:21.563 SGL Offset: Not Supported 00:08:21.563 Transport SGL Data Block: Not Supported 00:08:21.563 Replay Protected Memory Block: Not Supported 00:08:21.563 00:08:21.563 Firmware Slot Information 00:08:21.563 ========================= 00:08:21.563 Active slot: 1 00:08:21.563 Slot 1 Firmware Revision: 1.0 00:08:21.563 00:08:21.563 00:08:21.563 Commands Supported and Effects 00:08:21.563 ============================== 00:08:21.563 Admin Commands 00:08:21.563 -------------- 00:08:21.563 Delete I/O Submission Queue (00h): Supported 00:08:21.563 Create I/O Submission Queue (01h): Supported 00:08:21.563 Get Log Page (02h): Supported 00:08:21.563 Delete I/O Completion Queue (04h): Supported 00:08:21.563 Create I/O Completion Queue (05h): Supported 00:08:21.563 Identify (06h): Supported 00:08:21.563 Abort (08h): Supported 
00:08:21.563 Set Features (09h): Supported 00:08:21.563 Get Features (0Ah): Supported 00:08:21.563 Asynchronous Event Request (0Ch): Supported 00:08:21.563 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.563 Directive Send (19h): Supported 00:08:21.563 Directive Receive (1Ah): Supported 00:08:21.563 Virtualization Management (1Ch): Supported 00:08:21.563 Doorbell Buffer Config (7Ch): Supported 00:08:21.563 Format NVM (80h): Supported LBA-Change 00:08:21.563 I/O Commands 00:08:21.563 ------------ 00:08:21.563 Flush (00h): Supported LBA-Change 00:08:21.563 Write (01h): Supported LBA-Change 00:08:21.563 Read (02h): Supported 00:08:21.563 Compare (05h): Supported 00:08:21.563 Write Zeroes (08h): Supported LBA-Change 00:08:21.563 Dataset Management (09h): Supported LBA-Change 00:08:21.563 Unknown (0Ch): Supported 00:08:21.563 Unknown (12h): Supported 00:08:21.563 Copy (19h): Supported LBA-Change 00:08:21.563 Unknown (1Dh): Supported LBA-Change 00:08:21.563 00:08:21.563 Error Log 00:08:21.563 ========= 00:08:21.563 00:08:21.563 Arbitration 00:08:21.563 =========== 00:08:21.563 Arbitration Burst: no limit 00:08:21.563 00:08:21.563 Power Management 00:08:21.563 ================ 00:08:21.563 Number of Power States: 1 00:08:21.563 Current Power State: Power State #0 00:08:21.563 Power State #0: 00:08:21.563 Max Power: 25.00 W 00:08:21.563 Non-Operational State: Operational 00:08:21.563 Entry Latency: 16 microseconds 00:08:21.563 Exit Latency: 4 microseconds 00:08:21.563 Relative Read Throughput: 0 00:08:21.563 Relative Read Latency: 0 00:08:21.563 Relative Write Throughput: 0 00:08:21.563 Relative Write Latency: 0 00:08:21.563 Idle Power: Not Reported 00:08:21.563 Active Power: Not Reported 00:08:21.563 Non-Operational Permissive Mode: Not Supported 00:08:21.563 00:08:21.563 Health Information 00:08:21.563 ================== 00:08:21.563 Critical Warnings: 00:08:21.563 Available Spare Space: OK 00:08:21.563 Temperature: OK 00:08:21.563 Device Reliability: OK 00:08:21.563 Read Only: No 00:08:21.563 Volatile Memory Backup: OK 00:08:21.563 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.563 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.563 Available Spare: 0% 00:08:21.563 Available Spare Threshold: 0% 00:08:21.563 Life Percentage Used: 0% 00:08:21.563 Data Units Read: 715 00:08:21.563 Data Units Written: 643 00:08:21.563 Host Read Commands: 42863 00:08:21.563 Host Write Commands: 42649 00:08:21.563 Controller Busy Time: 0 minutes 00:08:21.563 Power Cycles: 0 00:08:21.563 Power On Hours: 0 hours 00:08:21.563 Unsafe Shutdowns: 0 00:08:21.563 Unrecoverable Media Errors: 0 00:08:21.563 Lifetime Error Log Entries: 0 00:08:21.563 Warning Temperature Time: 0 minutes 00:08:21.563 Critical Temperature Time: 0 minutes 00:08:21.563 00:08:21.563 Number of Queues 00:08:21.563 ================ 00:08:21.563 Number of I/O Submission Queues: 64 00:08:21.563 Number of I/O Completion Queues: 64 00:08:21.563 00:08:21.564 ZNS Specific Controller Data 00:08:21.564 ============================ 00:08:21.564 Zone Append Size Limit: 0 00:08:21.564 00:08:21.564 00:08:21.564 Active Namespaces 00:08:21.564 ================= 00:08:21.564 Namespace ID:1 00:08:21.564 Error Recovery Timeout: Unlimited 00:08:21.564 Command Set Identifier: NVM (00h) 00:08:21.564 Deallocate: Supported 00:08:21.564 Deallocated/Unwritten Error: Supported 00:08:21.564 Deallocated Read Value: All 0x00 00:08:21.564 Deallocate in Write Zeroes: Not Supported 00:08:21.564 Deallocated Guard Field: 0xFFFF 00:08:21.564 Flush: 
Supported 00:08:21.564 Reservation: Not Supported 00:08:21.564 Metadata Transferred as: Separate Metadata Buffer 00:08:21.564 Namespace Sharing Capabilities: Private 00:08:21.564 Size (in LBAs): 1548666 (5GiB) 00:08:21.564 Capacity (in LBAs): 1548666 (5GiB) 00:08:21.564 Utilization (in LBAs): 1548666 (5GiB) 00:08:21.564 Thin Provisioning: Not Supported 00:08:21.564 Per-NS Atomic Units: No 00:08:21.564 Maximum Single Source Range Length: 128 00:08:21.564 Maximum Copy Length: 128 00:08:21.564 Maximum Source Range Count: 128 00:08:21.564 NGUID/EUI64 Never Reused: No 00:08:21.564 Namespace Write Protected: No 00:08:21.564 Number of LBA Formats: 8 00:08:21.564 Current LBA Format: LBA Format #07 00:08:21.564 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.564 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.564 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.564 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.564 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.564 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.564 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.564 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.564 00:08:21.564 NVM Specific Namespace Data 00:08:21.564 =========================== 00:08:21.564 Logical Block Storage Tag Mask: 0 00:08:21.564 Protection Information Capabilities: 00:08:21.564 16b Guard Protection Information Storage Tag Support: No 00:08:21.564 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.564 Storage Tag Check Read Support: No 00:08:21.564 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.564 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.564 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:21.823 ===================================================== 00:08:21.823 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.823 ===================================================== 00:08:21.823 Controller Capabilities/Features 00:08:21.823 ================================ 00:08:21.823 Vendor ID: 1b36 00:08:21.823 Subsystem Vendor ID: 1af4 00:08:21.823 Serial Number: 12341 00:08:21.823 Model Number: QEMU NVMe Ctrl 00:08:21.823 Firmware Version: 8.0.0 00:08:21.823 Recommended Arb Burst: 6 00:08:21.823 IEEE OUI Identifier: 00 54 52 00:08:21.823 Multi-path I/O 00:08:21.823 May have multiple subsystem ports: No 00:08:21.823 May have multiple controllers: No 00:08:21.823 Associated with SR-IOV VF: No 00:08:21.823 Max Data Transfer Size: 524288 00:08:21.823 Max Number of Namespaces: 256 00:08:21.823 Max Number of I/O Queues: 64 00:08:21.823 NVMe 
Specification Version (VS): 1.4 00:08:21.823 NVMe Specification Version (Identify): 1.4 00:08:21.824 Maximum Queue Entries: 2048 00:08:21.824 Contiguous Queues Required: Yes 00:08:21.824 Arbitration Mechanisms Supported 00:08:21.824 Weighted Round Robin: Not Supported 00:08:21.824 Vendor Specific: Not Supported 00:08:21.824 Reset Timeout: 7500 ms 00:08:21.824 Doorbell Stride: 4 bytes 00:08:21.824 NVM Subsystem Reset: Not Supported 00:08:21.824 Command Sets Supported 00:08:21.824 NVM Command Set: Supported 00:08:21.824 Boot Partition: Not Supported 00:08:21.824 Memory Page Size Minimum: 4096 bytes 00:08:21.824 Memory Page Size Maximum: 65536 bytes 00:08:21.824 Persistent Memory Region: Not Supported 00:08:21.824 Optional Asynchronous Events Supported 00:08:21.824 Namespace Attribute Notices: Supported 00:08:21.824 Firmware Activation Notices: Not Supported 00:08:21.824 ANA Change Notices: Not Supported 00:08:21.824 PLE Aggregate Log Change Notices: Not Supported 00:08:21.824 LBA Status Info Alert Notices: Not Supported 00:08:21.824 EGE Aggregate Log Change Notices: Not Supported 00:08:21.824 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.824 Zone Descriptor Change Notices: Not Supported 00:08:21.824 Discovery Log Change Notices: Not Supported 00:08:21.824 Controller Attributes 00:08:21.824 128-bit Host Identifier: Not Supported 00:08:21.824 Non-Operational Permissive Mode: Not Supported 00:08:21.824 NVM Sets: Not Supported 00:08:21.824 Read Recovery Levels: Not Supported 00:08:21.824 Endurance Groups: Not Supported 00:08:21.824 Predictable Latency Mode: Not Supported 00:08:21.824 Traffic Based Keep ALive: Not Supported 00:08:21.824 Namespace Granularity: Not Supported 00:08:21.824 SQ Associations: Not Supported 00:08:21.824 UUID List: Not Supported 00:08:21.824 Multi-Domain Subsystem: Not Supported 00:08:21.824 Fixed Capacity Management: Not Supported 00:08:21.824 Variable Capacity Management: Not Supported 00:08:21.824 Delete Endurance Group: Not Supported 00:08:21.824 Delete NVM Set: Not Supported 00:08:21.824 Extended LBA Formats Supported: Supported 00:08:21.824 Flexible Data Placement Supported: Not Supported 00:08:21.824 00:08:21.824 Controller Memory Buffer Support 00:08:21.824 ================================ 00:08:21.824 Supported: No 00:08:21.824 00:08:21.824 Persistent Memory Region Support 00:08:21.824 ================================ 00:08:21.824 Supported: No 00:08:21.824 00:08:21.824 Admin Command Set Attributes 00:08:21.824 ============================ 00:08:21.824 Security Send/Receive: Not Supported 00:08:21.824 Format NVM: Supported 00:08:21.824 Firmware Activate/Download: Not Supported 00:08:21.824 Namespace Management: Supported 00:08:21.824 Device Self-Test: Not Supported 00:08:21.824 Directives: Supported 00:08:21.824 NVMe-MI: Not Supported 00:08:21.824 Virtualization Management: Not Supported 00:08:21.824 Doorbell Buffer Config: Supported 00:08:21.824 Get LBA Status Capability: Not Supported 00:08:21.824 Command & Feature Lockdown Capability: Not Supported 00:08:21.824 Abort Command Limit: 4 00:08:21.824 Async Event Request Limit: 4 00:08:21.824 Number of Firmware Slots: N/A 00:08:21.824 Firmware Slot 1 Read-Only: N/A 00:08:21.824 Firmware Activation Without Reset: N/A 00:08:21.824 Multiple Update Detection Support: N/A 00:08:21.824 Firmware Update Granularity: No Information Provided 00:08:21.824 Per-Namespace SMART Log: Yes 00:08:21.824 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.824 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:21.824 Command Effects Log Page: Supported 00:08:21.824 Get Log Page Extended Data: Supported 00:08:21.824 Telemetry Log Pages: Not Supported 00:08:21.824 Persistent Event Log Pages: Not Supported 00:08:21.824 Supported Log Pages Log Page: May Support 00:08:21.824 Commands Supported & Effects Log Page: Not Supported 00:08:21.824 Feature Identifiers & Effects Log Page:May Support 00:08:21.824 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.824 Data Area 4 for Telemetry Log: Not Supported 00:08:21.824 Error Log Page Entries Supported: 1 00:08:21.824 Keep Alive: Not Supported 00:08:21.824 00:08:21.824 NVM Command Set Attributes 00:08:21.824 ========================== 00:08:21.824 Submission Queue Entry Size 00:08:21.824 Max: 64 00:08:21.824 Min: 64 00:08:21.824 Completion Queue Entry Size 00:08:21.824 Max: 16 00:08:21.824 Min: 16 00:08:21.824 Number of Namespaces: 256 00:08:21.824 Compare Command: Supported 00:08:21.824 Write Uncorrectable Command: Not Supported 00:08:21.824 Dataset Management Command: Supported 00:08:21.824 Write Zeroes Command: Supported 00:08:21.824 Set Features Save Field: Supported 00:08:21.824 Reservations: Not Supported 00:08:21.824 Timestamp: Supported 00:08:21.824 Copy: Supported 00:08:21.824 Volatile Write Cache: Present 00:08:21.824 Atomic Write Unit (Normal): 1 00:08:21.824 Atomic Write Unit (PFail): 1 00:08:21.824 Atomic Compare & Write Unit: 1 00:08:21.824 Fused Compare & Write: Not Supported 00:08:21.824 Scatter-Gather List 00:08:21.824 SGL Command Set: Supported 00:08:21.824 SGL Keyed: Not Supported 00:08:21.824 SGL Bit Bucket Descriptor: Not Supported 00:08:21.824 SGL Metadata Pointer: Not Supported 00:08:21.824 Oversized SGL: Not Supported 00:08:21.824 SGL Metadata Address: Not Supported 00:08:21.824 SGL Offset: Not Supported 00:08:21.824 Transport SGL Data Block: Not Supported 00:08:21.824 Replay Protected Memory Block: Not Supported 00:08:21.824 00:08:21.824 Firmware Slot Information 00:08:21.824 ========================= 00:08:21.824 Active slot: 1 00:08:21.824 Slot 1 Firmware Revision: 1.0 00:08:21.824 00:08:21.824 00:08:21.824 Commands Supported and Effects 00:08:21.824 ============================== 00:08:21.824 Admin Commands 00:08:21.824 -------------- 00:08:21.824 Delete I/O Submission Queue (00h): Supported 00:08:21.824 Create I/O Submission Queue (01h): Supported 00:08:21.824 Get Log Page (02h): Supported 00:08:21.824 Delete I/O Completion Queue (04h): Supported 00:08:21.824 Create I/O Completion Queue (05h): Supported 00:08:21.824 Identify (06h): Supported 00:08:21.824 Abort (08h): Supported 00:08:21.824 Set Features (09h): Supported 00:08:21.824 Get Features (0Ah): Supported 00:08:21.824 Asynchronous Event Request (0Ch): Supported 00:08:21.824 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.824 Directive Send (19h): Supported 00:08:21.824 Directive Receive (1Ah): Supported 00:08:21.824 Virtualization Management (1Ch): Supported 00:08:21.824 Doorbell Buffer Config (7Ch): Supported 00:08:21.824 Format NVM (80h): Supported LBA-Change 00:08:21.824 I/O Commands 00:08:21.824 ------------ 00:08:21.824 Flush (00h): Supported LBA-Change 00:08:21.824 Write (01h): Supported LBA-Change 00:08:21.824 Read (02h): Supported 00:08:21.824 Compare (05h): Supported 00:08:21.824 Write Zeroes (08h): Supported LBA-Change 00:08:21.824 Dataset Management (09h): Supported LBA-Change 00:08:21.824 Unknown (0Ch): Supported 00:08:21.824 Unknown (12h): Supported 00:08:21.824 Copy (19h): Supported LBA-Change 00:08:21.824 Unknown (1Dh): 
Supported LBA-Change 00:08:21.824 00:08:21.824 Error Log 00:08:21.824 ========= 00:08:21.824 00:08:21.824 Arbitration 00:08:21.824 =========== 00:08:21.824 Arbitration Burst: no limit 00:08:21.824 00:08:21.824 Power Management 00:08:21.824 ================ 00:08:21.824 Number of Power States: 1 00:08:21.824 Current Power State: Power State #0 00:08:21.824 Power State #0: 00:08:21.824 Max Power: 25.00 W 00:08:21.824 Non-Operational State: Operational 00:08:21.824 Entry Latency: 16 microseconds 00:08:21.824 Exit Latency: 4 microseconds 00:08:21.824 Relative Read Throughput: 0 00:08:21.824 Relative Read Latency: 0 00:08:21.824 Relative Write Throughput: 0 00:08:21.824 Relative Write Latency: 0 00:08:21.824 Idle Power: Not Reported 00:08:21.824 Active Power: Not Reported 00:08:21.824 Non-Operational Permissive Mode: Not Supported 00:08:21.824 00:08:21.824 Health Information 00:08:21.824 ================== 00:08:21.824 Critical Warnings: 00:08:21.824 Available Spare Space: OK 00:08:21.824 Temperature: OK 00:08:21.824 Device Reliability: OK 00:08:21.824 Read Only: No 00:08:21.824 Volatile Memory Backup: OK 00:08:21.824 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.824 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.824 Available Spare: 0% 00:08:21.824 Available Spare Threshold: 0% 00:08:21.824 Life Percentage Used: 0% 00:08:21.824 Data Units Read: 1089 00:08:21.824 Data Units Written: 950 00:08:21.824 Host Read Commands: 63102 00:08:21.824 Host Write Commands: 61795 00:08:21.824 Controller Busy Time: 0 minutes 00:08:21.824 Power Cycles: 0 00:08:21.824 Power On Hours: 0 hours 00:08:21.824 Unsafe Shutdowns: 0 00:08:21.824 Unrecoverable Media Errors: 0 00:08:21.824 Lifetime Error Log Entries: 0 00:08:21.825 Warning Temperature Time: 0 minutes 00:08:21.825 Critical Temperature Time: 0 minutes 00:08:21.825 00:08:21.825 Number of Queues 00:08:21.825 ================ 00:08:21.825 Number of I/O Submission Queues: 64 00:08:21.825 Number of I/O Completion Queues: 64 00:08:21.825 00:08:21.825 ZNS Specific Controller Data 00:08:21.825 ============================ 00:08:21.825 Zone Append Size Limit: 0 00:08:21.825 00:08:21.825 00:08:21.825 Active Namespaces 00:08:21.825 ================= 00:08:21.825 Namespace ID:1 00:08:21.825 Error Recovery Timeout: Unlimited 00:08:21.825 Command Set Identifier: NVM (00h) 00:08:21.825 Deallocate: Supported 00:08:21.825 Deallocated/Unwritten Error: Supported 00:08:21.825 Deallocated Read Value: All 0x00 00:08:21.825 Deallocate in Write Zeroes: Not Supported 00:08:21.825 Deallocated Guard Field: 0xFFFF 00:08:21.825 Flush: Supported 00:08:21.825 Reservation: Not Supported 00:08:21.825 Namespace Sharing Capabilities: Private 00:08:21.825 Size (in LBAs): 1310720 (5GiB) 00:08:21.825 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.825 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.825 Thin Provisioning: Not Supported 00:08:21.825 Per-NS Atomic Units: No 00:08:21.825 Maximum Single Source Range Length: 128 00:08:21.825 Maximum Copy Length: 128 00:08:21.825 Maximum Source Range Count: 128 00:08:21.825 NGUID/EUI64 Never Reused: No 00:08:21.825 Namespace Write Protected: No 00:08:21.825 Number of LBA Formats: 8 00:08:21.825 Current LBA Format: LBA Format #04 00:08:21.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.825 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.825 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.825 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.825 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:21.825 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.825 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.825 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.825 00:08:21.825 NVM Specific Namespace Data 00:08:21.825 =========================== 00:08:21.825 Logical Block Storage Tag Mask: 0 00:08:21.825 Protection Information Capabilities: 00:08:21.825 16b Guard Protection Information Storage Tag Support: No 00:08:21.825 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.825 Storage Tag Check Read Support: No 00:08:21.825 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.825 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.825 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:22.086 ===================================================== 00:08:22.086 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.086 ===================================================== 00:08:22.086 Controller Capabilities/Features 00:08:22.086 ================================ 00:08:22.086 Vendor ID: 1b36 00:08:22.086 Subsystem Vendor ID: 1af4 00:08:22.086 Serial Number: 12342 00:08:22.086 Model Number: QEMU NVMe Ctrl 00:08:22.086 Firmware Version: 8.0.0 00:08:22.086 Recommended Arb Burst: 6 00:08:22.086 IEEE OUI Identifier: 00 54 52 00:08:22.086 Multi-path I/O 00:08:22.086 May have multiple subsystem ports: No 00:08:22.086 May have multiple controllers: No 00:08:22.086 Associated with SR-IOV VF: No 00:08:22.086 Max Data Transfer Size: 524288 00:08:22.086 Max Number of Namespaces: 256 00:08:22.086 Max Number of I/O Queues: 64 00:08:22.086 NVMe Specification Version (VS): 1.4 00:08:22.086 NVMe Specification Version (Identify): 1.4 00:08:22.086 Maximum Queue Entries: 2048 00:08:22.086 Contiguous Queues Required: Yes 00:08:22.086 Arbitration Mechanisms Supported 00:08:22.086 Weighted Round Robin: Not Supported 00:08:22.086 Vendor Specific: Not Supported 00:08:22.086 Reset Timeout: 7500 ms 00:08:22.086 Doorbell Stride: 4 bytes 00:08:22.086 NVM Subsystem Reset: Not Supported 00:08:22.086 Command Sets Supported 00:08:22.086 NVM Command Set: Supported 00:08:22.086 Boot Partition: Not Supported 00:08:22.086 Memory Page Size Minimum: 4096 bytes 00:08:22.086 Memory Page Size Maximum: 65536 bytes 00:08:22.086 Persistent Memory Region: Not Supported 00:08:22.086 Optional Asynchronous Events Supported 00:08:22.086 Namespace Attribute Notices: Supported 00:08:22.086 Firmware Activation Notices: Not Supported 00:08:22.086 ANA Change Notices: Not Supported 00:08:22.086 PLE Aggregate Log Change Notices: Not Supported 00:08:22.086 LBA Status Info Alert Notices: 
Not Supported 00:08:22.086 EGE Aggregate Log Change Notices: Not Supported 00:08:22.086 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.086 Zone Descriptor Change Notices: Not Supported 00:08:22.086 Discovery Log Change Notices: Not Supported 00:08:22.086 Controller Attributes 00:08:22.086 128-bit Host Identifier: Not Supported 00:08:22.086 Non-Operational Permissive Mode: Not Supported 00:08:22.086 NVM Sets: Not Supported 00:08:22.086 Read Recovery Levels: Not Supported 00:08:22.086 Endurance Groups: Not Supported 00:08:22.086 Predictable Latency Mode: Not Supported 00:08:22.086 Traffic Based Keep ALive: Not Supported 00:08:22.086 Namespace Granularity: Not Supported 00:08:22.086 SQ Associations: Not Supported 00:08:22.086 UUID List: Not Supported 00:08:22.086 Multi-Domain Subsystem: Not Supported 00:08:22.086 Fixed Capacity Management: Not Supported 00:08:22.086 Variable Capacity Management: Not Supported 00:08:22.086 Delete Endurance Group: Not Supported 00:08:22.086 Delete NVM Set: Not Supported 00:08:22.086 Extended LBA Formats Supported: Supported 00:08:22.086 Flexible Data Placement Supported: Not Supported 00:08:22.086 00:08:22.086 Controller Memory Buffer Support 00:08:22.086 ================================ 00:08:22.086 Supported: No 00:08:22.086 00:08:22.086 Persistent Memory Region Support 00:08:22.086 ================================ 00:08:22.086 Supported: No 00:08:22.086 00:08:22.086 Admin Command Set Attributes 00:08:22.086 ============================ 00:08:22.086 Security Send/Receive: Not Supported 00:08:22.086 Format NVM: Supported 00:08:22.086 Firmware Activate/Download: Not Supported 00:08:22.086 Namespace Management: Supported 00:08:22.086 Device Self-Test: Not Supported 00:08:22.086 Directives: Supported 00:08:22.086 NVMe-MI: Not Supported 00:08:22.086 Virtualization Management: Not Supported 00:08:22.086 Doorbell Buffer Config: Supported 00:08:22.086 Get LBA Status Capability: Not Supported 00:08:22.086 Command & Feature Lockdown Capability: Not Supported 00:08:22.086 Abort Command Limit: 4 00:08:22.086 Async Event Request Limit: 4 00:08:22.086 Number of Firmware Slots: N/A 00:08:22.086 Firmware Slot 1 Read-Only: N/A 00:08:22.086 Firmware Activation Without Reset: N/A 00:08:22.086 Multiple Update Detection Support: N/A 00:08:22.086 Firmware Update Granularity: No Information Provided 00:08:22.086 Per-Namespace SMART Log: Yes 00:08:22.086 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.086 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:22.086 Command Effects Log Page: Supported 00:08:22.086 Get Log Page Extended Data: Supported 00:08:22.086 Telemetry Log Pages: Not Supported 00:08:22.086 Persistent Event Log Pages: Not Supported 00:08:22.086 Supported Log Pages Log Page: May Support 00:08:22.086 Commands Supported & Effects Log Page: Not Supported 00:08:22.086 Feature Identifiers & Effects Log Page:May Support 00:08:22.086 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.086 Data Area 4 for Telemetry Log: Not Supported 00:08:22.086 Error Log Page Entries Supported: 1 00:08:22.086 Keep Alive: Not Supported 00:08:22.086 00:08:22.086 NVM Command Set Attributes 00:08:22.086 ========================== 00:08:22.086 Submission Queue Entry Size 00:08:22.086 Max: 64 00:08:22.086 Min: 64 00:08:22.086 Completion Queue Entry Size 00:08:22.086 Max: 16 00:08:22.086 Min: 16 00:08:22.086 Number of Namespaces: 256 00:08:22.086 Compare Command: Supported 00:08:22.086 Write Uncorrectable Command: Not Supported 00:08:22.086 Dataset Management Command: 
Supported 00:08:22.086 Write Zeroes Command: Supported 00:08:22.086 Set Features Save Field: Supported 00:08:22.086 Reservations: Not Supported 00:08:22.086 Timestamp: Supported 00:08:22.086 Copy: Supported 00:08:22.086 Volatile Write Cache: Present 00:08:22.086 Atomic Write Unit (Normal): 1 00:08:22.086 Atomic Write Unit (PFail): 1 00:08:22.086 Atomic Compare & Write Unit: 1 00:08:22.086 Fused Compare & Write: Not Supported 00:08:22.086 Scatter-Gather List 00:08:22.086 SGL Command Set: Supported 00:08:22.086 SGL Keyed: Not Supported 00:08:22.086 SGL Bit Bucket Descriptor: Not Supported 00:08:22.086 SGL Metadata Pointer: Not Supported 00:08:22.086 Oversized SGL: Not Supported 00:08:22.086 SGL Metadata Address: Not Supported 00:08:22.086 SGL Offset: Not Supported 00:08:22.086 Transport SGL Data Block: Not Supported 00:08:22.086 Replay Protected Memory Block: Not Supported 00:08:22.086 00:08:22.086 Firmware Slot Information 00:08:22.086 ========================= 00:08:22.086 Active slot: 1 00:08:22.086 Slot 1 Firmware Revision: 1.0 00:08:22.086 00:08:22.086 00:08:22.086 Commands Supported and Effects 00:08:22.086 ============================== 00:08:22.086 Admin Commands 00:08:22.086 -------------- 00:08:22.086 Delete I/O Submission Queue (00h): Supported 00:08:22.086 Create I/O Submission Queue (01h): Supported 00:08:22.086 Get Log Page (02h): Supported 00:08:22.086 Delete I/O Completion Queue (04h): Supported 00:08:22.086 Create I/O Completion Queue (05h): Supported 00:08:22.086 Identify (06h): Supported 00:08:22.086 Abort (08h): Supported 00:08:22.086 Set Features (09h): Supported 00:08:22.086 Get Features (0Ah): Supported 00:08:22.086 Asynchronous Event Request (0Ch): Supported 00:08:22.086 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.086 Directive Send (19h): Supported 00:08:22.086 Directive Receive (1Ah): Supported 00:08:22.086 Virtualization Management (1Ch): Supported 00:08:22.086 Doorbell Buffer Config (7Ch): Supported 00:08:22.086 Format NVM (80h): Supported LBA-Change 00:08:22.086 I/O Commands 00:08:22.086 ------------ 00:08:22.086 Flush (00h): Supported LBA-Change 00:08:22.086 Write (01h): Supported LBA-Change 00:08:22.086 Read (02h): Supported 00:08:22.087 Compare (05h): Supported 00:08:22.087 Write Zeroes (08h): Supported LBA-Change 00:08:22.087 Dataset Management (09h): Supported LBA-Change 00:08:22.087 Unknown (0Ch): Supported 00:08:22.087 Unknown (12h): Supported 00:08:22.087 Copy (19h): Supported LBA-Change 00:08:22.087 Unknown (1Dh): Supported LBA-Change 00:08:22.087 00:08:22.087 Error Log 00:08:22.087 ========= 00:08:22.087 00:08:22.087 Arbitration 00:08:22.087 =========== 00:08:22.087 Arbitration Burst: no limit 00:08:22.087 00:08:22.087 Power Management 00:08:22.087 ================ 00:08:22.087 Number of Power States: 1 00:08:22.087 Current Power State: Power State #0 00:08:22.087 Power State #0: 00:08:22.087 Max Power: 25.00 W 00:08:22.087 Non-Operational State: Operational 00:08:22.087 Entry Latency: 16 microseconds 00:08:22.087 Exit Latency: 4 microseconds 00:08:22.087 Relative Read Throughput: 0 00:08:22.087 Relative Read Latency: 0 00:08:22.087 Relative Write Throughput: 0 00:08:22.087 Relative Write Latency: 0 00:08:22.087 Idle Power: Not Reported 00:08:22.087 Active Power: Not Reported 00:08:22.087 Non-Operational Permissive Mode: Not Supported 00:08:22.087 00:08:22.087 Health Information 00:08:22.087 ================== 00:08:22.087 Critical Warnings: 00:08:22.087 Available Spare Space: OK 00:08:22.087 Temperature: OK 00:08:22.087 Device 
Reliability: OK 00:08:22.087 Read Only: No 00:08:22.087 Volatile Memory Backup: OK 00:08:22.087 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.087 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.087 Available Spare: 0% 00:08:22.087 Available Spare Threshold: 0% 00:08:22.087 Life Percentage Used: 0% 00:08:22.087 Data Units Read: 2437 00:08:22.087 Data Units Written: 2224 00:08:22.087 Host Read Commands: 131598 00:08:22.087 Host Write Commands: 129867 00:08:22.087 Controller Busy Time: 0 minutes 00:08:22.087 Power Cycles: 0 00:08:22.087 Power On Hours: 0 hours 00:08:22.087 Unsafe Shutdowns: 0 00:08:22.087 Unrecoverable Media Errors: 0 00:08:22.087 Lifetime Error Log Entries: 0 00:08:22.087 Warning Temperature Time: 0 minutes 00:08:22.087 Critical Temperature Time: 0 minutes 00:08:22.087 00:08:22.087 Number of Queues 00:08:22.087 ================ 00:08:22.087 Number of I/O Submission Queues: 64 00:08:22.087 Number of I/O Completion Queues: 64 00:08:22.087 00:08:22.087 ZNS Specific Controller Data 00:08:22.087 ============================ 00:08:22.087 Zone Append Size Limit: 0 00:08:22.087 00:08:22.087 00:08:22.087 Active Namespaces 00:08:22.087 ================= 00:08:22.087 Namespace ID:1 00:08:22.087 Error Recovery Timeout: Unlimited 00:08:22.087 Command Set Identifier: NVM (00h) 00:08:22.087 Deallocate: Supported 00:08:22.087 Deallocated/Unwritten Error: Supported 00:08:22.087 Deallocated Read Value: All 0x00 00:08:22.087 Deallocate in Write Zeroes: Not Supported 00:08:22.087 Deallocated Guard Field: 0xFFFF 00:08:22.087 Flush: Supported 00:08:22.087 Reservation: Not Supported 00:08:22.087 Namespace Sharing Capabilities: Private 00:08:22.087 Size (in LBAs): 1048576 (4GiB) 00:08:22.087 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.087 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.087 Thin Provisioning: Not Supported 00:08:22.087 Per-NS Atomic Units: No 00:08:22.087 Maximum Single Source Range Length: 128 00:08:22.087 Maximum Copy Length: 128 00:08:22.087 Maximum Source Range Count: 128 00:08:22.087 NGUID/EUI64 Never Reused: No 00:08:22.087 Namespace Write Protected: No 00:08:22.087 Number of LBA Formats: 8 00:08:22.087 Current LBA Format: LBA Format #04 00:08:22.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.087 00:08:22.087 NVM Specific Namespace Data 00:08:22.087 =========================== 00:08:22.087 Logical Block Storage Tag Mask: 0 00:08:22.087 Protection Information Capabilities: 00:08:22.087 16b Guard Protection Information Storage Tag Support: No 00:08:22.087 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.087 Storage Tag Check Read Support: No 00:08:22.087 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Namespace ID:2 00:08:22.087 Error Recovery Timeout: Unlimited 00:08:22.087 Command Set Identifier: NVM (00h) 00:08:22.087 Deallocate: Supported 00:08:22.087 Deallocated/Unwritten Error: Supported 00:08:22.087 Deallocated Read Value: All 0x00 00:08:22.087 Deallocate in Write Zeroes: Not Supported 00:08:22.087 Deallocated Guard Field: 0xFFFF 00:08:22.087 Flush: Supported 00:08:22.087 Reservation: Not Supported 00:08:22.087 Namespace Sharing Capabilities: Private 00:08:22.087 Size (in LBAs): 1048576 (4GiB) 00:08:22.087 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.087 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.087 Thin Provisioning: Not Supported 00:08:22.087 Per-NS Atomic Units: No 00:08:22.087 Maximum Single Source Range Length: 128 00:08:22.087 Maximum Copy Length: 128 00:08:22.087 Maximum Source Range Count: 128 00:08:22.087 NGUID/EUI64 Never Reused: No 00:08:22.087 Namespace Write Protected: No 00:08:22.087 Number of LBA Formats: 8 00:08:22.087 Current LBA Format: LBA Format #04 00:08:22.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.087 00:08:22.087 NVM Specific Namespace Data 00:08:22.087 =========================== 00:08:22.087 Logical Block Storage Tag Mask: 0 00:08:22.087 Protection Information Capabilities: 00:08:22.087 16b Guard Protection Information Storage Tag Support: No 00:08:22.087 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.087 Storage Tag Check Read Support: No 00:08:22.087 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.087 Namespace ID:3 00:08:22.087 Error Recovery Timeout: Unlimited 00:08:22.087 Command Set Identifier: NVM (00h) 00:08:22.087 Deallocate: Supported 00:08:22.087 Deallocated/Unwritten Error: Supported 00:08:22.087 Deallocated Read Value: All 0x00 00:08:22.087 Deallocate in Write Zeroes: Not Supported 00:08:22.087 Deallocated Guard Field: 0xFFFF 00:08:22.087 Flush: Supported 00:08:22.087 Reservation: Not Supported 00:08:22.087 
Namespace Sharing Capabilities: Private 00:08:22.087 Size (in LBAs): 1048576 (4GiB) 00:08:22.087 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.087 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.087 Thin Provisioning: Not Supported 00:08:22.087 Per-NS Atomic Units: No 00:08:22.087 Maximum Single Source Range Length: 128 00:08:22.087 Maximum Copy Length: 128 00:08:22.087 Maximum Source Range Count: 128 00:08:22.087 NGUID/EUI64 Never Reused: No 00:08:22.087 Namespace Write Protected: No 00:08:22.087 Number of LBA Formats: 8 00:08:22.087 Current LBA Format: LBA Format #04 00:08:22.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.088 00:08:22.088 NVM Specific Namespace Data 00:08:22.088 =========================== 00:08:22.088 Logical Block Storage Tag Mask: 0 00:08:22.088 Protection Information Capabilities: 00:08:22.088 16b Guard Protection Information Storage Tag Support: No 00:08:22.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.088 Storage Tag Check Read Support: No 00:08:22.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.088 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:22.088 14:16:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:22.088 ===================================================== 00:08:22.088 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.088 ===================================================== 00:08:22.088 Controller Capabilities/Features 00:08:22.088 ================================ 00:08:22.088 Vendor ID: 1b36 00:08:22.088 Subsystem Vendor ID: 1af4 00:08:22.088 Serial Number: 12343 00:08:22.088 Model Number: QEMU NVMe Ctrl 00:08:22.088 Firmware Version: 8.0.0 00:08:22.088 Recommended Arb Burst: 6 00:08:22.088 IEEE OUI Identifier: 00 54 52 00:08:22.088 Multi-path I/O 00:08:22.088 May have multiple subsystem ports: No 00:08:22.088 May have multiple controllers: Yes 00:08:22.088 Associated with SR-IOV VF: No 00:08:22.088 Max Data Transfer Size: 524288 00:08:22.088 Max Number of Namespaces: 256 00:08:22.088 Max Number of I/O Queues: 64 00:08:22.088 NVMe Specification Version (VS): 1.4 00:08:22.088 NVMe Specification Version (Identify): 1.4 00:08:22.088 Maximum Queue Entries: 2048 
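Editor's note: the identify dumps interleaved through this stretch of the log (controller 12342 above, 12343 continuing below) are produced by nvme.sh looping spdk_nvme_identify over each attached PCIe function, as the "for bdf in "${bdfs[@]}"" xtrace lines show. A minimal standalone sketch of the same pattern follows; the hard-coded BDF list and repo path are taken from this run for illustration only, since the harness builds its bdfs array itself and assumes the devices are already set up for SPDK use.
    # Sketch only: mirrors the xtrace above with the four QEMU controllers seen
    # in this run hard-coded; nvme.sh discovers its bdfs array dynamically, and
    # the devices must already be bound for SPDK (scripts/setup.sh), as here.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done
Each iteration prints one controller dump of the kind captured in this section of the log.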
00:08:22.088 Contiguous Queues Required: Yes 00:08:22.088 Arbitration Mechanisms Supported 00:08:22.088 Weighted Round Robin: Not Supported 00:08:22.088 Vendor Specific: Not Supported 00:08:22.088 Reset Timeout: 7500 ms 00:08:22.088 Doorbell Stride: 4 bytes 00:08:22.088 NVM Subsystem Reset: Not Supported 00:08:22.088 Command Sets Supported 00:08:22.088 NVM Command Set: Supported 00:08:22.088 Boot Partition: Not Supported 00:08:22.088 Memory Page Size Minimum: 4096 bytes 00:08:22.088 Memory Page Size Maximum: 65536 bytes 00:08:22.088 Persistent Memory Region: Not Supported 00:08:22.088 Optional Asynchronous Events Supported 00:08:22.088 Namespace Attribute Notices: Supported 00:08:22.088 Firmware Activation Notices: Not Supported 00:08:22.088 ANA Change Notices: Not Supported 00:08:22.088 PLE Aggregate Log Change Notices: Not Supported 00:08:22.088 LBA Status Info Alert Notices: Not Supported 00:08:22.088 EGE Aggregate Log Change Notices: Not Supported 00:08:22.088 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.088 Zone Descriptor Change Notices: Not Supported 00:08:22.088 Discovery Log Change Notices: Not Supported 00:08:22.088 Controller Attributes 00:08:22.088 128-bit Host Identifier: Not Supported 00:08:22.088 Non-Operational Permissive Mode: Not Supported 00:08:22.088 NVM Sets: Not Supported 00:08:22.088 Read Recovery Levels: Not Supported 00:08:22.088 Endurance Groups: Supported 00:08:22.088 Predictable Latency Mode: Not Supported 00:08:22.088 Traffic Based Keep ALive: Not Supported 00:08:22.088 Namespace Granularity: Not Supported 00:08:22.088 SQ Associations: Not Supported 00:08:22.088 UUID List: Not Supported 00:08:22.088 Multi-Domain Subsystem: Not Supported 00:08:22.088 Fixed Capacity Management: Not Supported 00:08:22.088 Variable Capacity Management: Not Supported 00:08:22.088 Delete Endurance Group: Not Supported 00:08:22.088 Delete NVM Set: Not Supported 00:08:22.088 Extended LBA Formats Supported: Supported 00:08:22.088 Flexible Data Placement Supported: Supported 00:08:22.088 00:08:22.088 Controller Memory Buffer Support 00:08:22.088 ================================ 00:08:22.088 Supported: No 00:08:22.088 00:08:22.088 Persistent Memory Region Support 00:08:22.088 ================================ 00:08:22.088 Supported: No 00:08:22.088 00:08:22.088 Admin Command Set Attributes 00:08:22.088 ============================ 00:08:22.088 Security Send/Receive: Not Supported 00:08:22.088 Format NVM: Supported 00:08:22.088 Firmware Activate/Download: Not Supported 00:08:22.088 Namespace Management: Supported 00:08:22.088 Device Self-Test: Not Supported 00:08:22.088 Directives: Supported 00:08:22.088 NVMe-MI: Not Supported 00:08:22.088 Virtualization Management: Not Supported 00:08:22.088 Doorbell Buffer Config: Supported 00:08:22.088 Get LBA Status Capability: Not Supported 00:08:22.088 Command & Feature Lockdown Capability: Not Supported 00:08:22.088 Abort Command Limit: 4 00:08:22.088 Async Event Request Limit: 4 00:08:22.088 Number of Firmware Slots: N/A 00:08:22.088 Firmware Slot 1 Read-Only: N/A 00:08:22.088 Firmware Activation Without Reset: N/A 00:08:22.088 Multiple Update Detection Support: N/A 00:08:22.088 Firmware Update Granularity: No Information Provided 00:08:22.088 Per-Namespace SMART Log: Yes 00:08:22.088 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.088 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:22.088 Command Effects Log Page: Supported 00:08:22.088 Get Log Page Extended Data: Supported 00:08:22.088 Telemetry Log Pages: Not 
Supported 00:08:22.088 Persistent Event Log Pages: Not Supported 00:08:22.088 Supported Log Pages Log Page: May Support 00:08:22.088 Commands Supported & Effects Log Page: Not Supported 00:08:22.088 Feature Identifiers & Effects Log Page:May Support 00:08:22.088 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.088 Data Area 4 for Telemetry Log: Not Supported 00:08:22.088 Error Log Page Entries Supported: 1 00:08:22.088 Keep Alive: Not Supported 00:08:22.088 00:08:22.088 NVM Command Set Attributes 00:08:22.088 ========================== 00:08:22.088 Submission Queue Entry Size 00:08:22.088 Max: 64 00:08:22.088 Min: 64 00:08:22.088 Completion Queue Entry Size 00:08:22.088 Max: 16 00:08:22.088 Min: 16 00:08:22.088 Number of Namespaces: 256 00:08:22.088 Compare Command: Supported 00:08:22.088 Write Uncorrectable Command: Not Supported 00:08:22.088 Dataset Management Command: Supported 00:08:22.088 Write Zeroes Command: Supported 00:08:22.088 Set Features Save Field: Supported 00:08:22.088 Reservations: Not Supported 00:08:22.088 Timestamp: Supported 00:08:22.088 Copy: Supported 00:08:22.088 Volatile Write Cache: Present 00:08:22.088 Atomic Write Unit (Normal): 1 00:08:22.088 Atomic Write Unit (PFail): 1 00:08:22.088 Atomic Compare & Write Unit: 1 00:08:22.088 Fused Compare & Write: Not Supported 00:08:22.088 Scatter-Gather List 00:08:22.088 SGL Command Set: Supported 00:08:22.088 SGL Keyed: Not Supported 00:08:22.088 SGL Bit Bucket Descriptor: Not Supported 00:08:22.088 SGL Metadata Pointer: Not Supported 00:08:22.088 Oversized SGL: Not Supported 00:08:22.088 SGL Metadata Address: Not Supported 00:08:22.088 SGL Offset: Not Supported 00:08:22.088 Transport SGL Data Block: Not Supported 00:08:22.088 Replay Protected Memory Block: Not Supported 00:08:22.088 00:08:22.088 Firmware Slot Information 00:08:22.088 ========================= 00:08:22.088 Active slot: 1 00:08:22.088 Slot 1 Firmware Revision: 1.0 00:08:22.088 00:08:22.088 00:08:22.088 Commands Supported and Effects 00:08:22.088 ============================== 00:08:22.088 Admin Commands 00:08:22.088 -------------- 00:08:22.088 Delete I/O Submission Queue (00h): Supported 00:08:22.088 Create I/O Submission Queue (01h): Supported 00:08:22.088 Get Log Page (02h): Supported 00:08:22.088 Delete I/O Completion Queue (04h): Supported 00:08:22.088 Create I/O Completion Queue (05h): Supported 00:08:22.088 Identify (06h): Supported 00:08:22.088 Abort (08h): Supported 00:08:22.088 Set Features (09h): Supported 00:08:22.088 Get Features (0Ah): Supported 00:08:22.089 Asynchronous Event Request (0Ch): Supported 00:08:22.089 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.089 Directive Send (19h): Supported 00:08:22.089 Directive Receive (1Ah): Supported 00:08:22.089 Virtualization Management (1Ch): Supported 00:08:22.089 Doorbell Buffer Config (7Ch): Supported 00:08:22.089 Format NVM (80h): Supported LBA-Change 00:08:22.089 I/O Commands 00:08:22.089 ------------ 00:08:22.089 Flush (00h): Supported LBA-Change 00:08:22.089 Write (01h): Supported LBA-Change 00:08:22.089 Read (02h): Supported 00:08:22.089 Compare (05h): Supported 00:08:22.089 Write Zeroes (08h): Supported LBA-Change 00:08:22.089 Dataset Management (09h): Supported LBA-Change 00:08:22.089 Unknown (0Ch): Supported 00:08:22.089 Unknown (12h): Supported 00:08:22.089 Copy (19h): Supported LBA-Change 00:08:22.089 Unknown (1Dh): Supported LBA-Change 00:08:22.089 00:08:22.089 Error Log 00:08:22.089 ========= 00:08:22.089 00:08:22.089 Arbitration 00:08:22.089 =========== 
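Editor's note: compared with the 12342 controller dumped above, this 12343 controller (subsystem NQN nqn.2019-08.org.qemu:fdp-subsys3) reports May have multiple controllers: Yes, Endurance Groups: Supported and Flexible Data Placement Supported: Supported, which is why FDP-specific sections appear further down for this controller only. A quick, hedged way to surface such capability differences outside the test harness is to capture both dumps and diff them; this is a sketch with illustrative filenames, not part of nvme.sh.
    # Sketch: capture both controllers' identify output and show only the
    # lines that differ. Paths and filenames are illustrative.
    id=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    "$id" -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 > ctrl_12.txt
    "$id" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 > ctrl_13.txt
    diff -u ctrl_12.txt ctrl_13.txt | grep '^[-+][^-+]'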
00:08:22.089 Arbitration Burst: no limit 00:08:22.089 00:08:22.089 Power Management 00:08:22.089 ================ 00:08:22.089 Number of Power States: 1 00:08:22.089 Current Power State: Power State #0 00:08:22.089 Power State #0: 00:08:22.089 Max Power: 25.00 W 00:08:22.089 Non-Operational State: Operational 00:08:22.089 Entry Latency: 16 microseconds 00:08:22.089 Exit Latency: 4 microseconds 00:08:22.089 Relative Read Throughput: 0 00:08:22.089 Relative Read Latency: 0 00:08:22.089 Relative Write Throughput: 0 00:08:22.089 Relative Write Latency: 0 00:08:22.089 Idle Power: Not Reported 00:08:22.089 Active Power: Not Reported 00:08:22.089 Non-Operational Permissive Mode: Not Supported 00:08:22.089 00:08:22.089 Health Information 00:08:22.089 ================== 00:08:22.089 Critical Warnings: 00:08:22.089 Available Spare Space: OK 00:08:22.089 Temperature: OK 00:08:22.089 Device Reliability: OK 00:08:22.089 Read Only: No 00:08:22.089 Volatile Memory Backup: OK 00:08:22.089 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.089 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.089 Available Spare: 0% 00:08:22.089 Available Spare Threshold: 0% 00:08:22.089 Life Percentage Used: 0% 00:08:22.089 Data Units Read: 1088 00:08:22.089 Data Units Written: 1017 00:08:22.089 Host Read Commands: 46138 00:08:22.089 Host Write Commands: 45561 00:08:22.089 Controller Busy Time: 0 minutes 00:08:22.089 Power Cycles: 0 00:08:22.089 Power On Hours: 0 hours 00:08:22.089 Unsafe Shutdowns: 0 00:08:22.089 Unrecoverable Media Errors: 0 00:08:22.089 Lifetime Error Log Entries: 0 00:08:22.089 Warning Temperature Time: 0 minutes 00:08:22.089 Critical Temperature Time: 0 minutes 00:08:22.089 00:08:22.089 Number of Queues 00:08:22.089 ================ 00:08:22.089 Number of I/O Submission Queues: 64 00:08:22.089 Number of I/O Completion Queues: 64 00:08:22.089 00:08:22.089 ZNS Specific Controller Data 00:08:22.089 ============================ 00:08:22.089 Zone Append Size Limit: 0 00:08:22.089 00:08:22.089 00:08:22.089 Active Namespaces 00:08:22.089 ================= 00:08:22.089 Namespace ID:1 00:08:22.089 Error Recovery Timeout: Unlimited 00:08:22.089 Command Set Identifier: NVM (00h) 00:08:22.089 Deallocate: Supported 00:08:22.089 Deallocated/Unwritten Error: Supported 00:08:22.089 Deallocated Read Value: All 0x00 00:08:22.089 Deallocate in Write Zeroes: Not Supported 00:08:22.089 Deallocated Guard Field: 0xFFFF 00:08:22.089 Flush: Supported 00:08:22.089 Reservation: Not Supported 00:08:22.089 Namespace Sharing Capabilities: Multiple Controllers 00:08:22.089 Size (in LBAs): 262144 (1GiB) 00:08:22.089 Capacity (in LBAs): 262144 (1GiB) 00:08:22.089 Utilization (in LBAs): 262144 (1GiB) 00:08:22.089 Thin Provisioning: Not Supported 00:08:22.089 Per-NS Atomic Units: No 00:08:22.089 Maximum Single Source Range Length: 128 00:08:22.089 Maximum Copy Length: 128 00:08:22.089 Maximum Source Range Count: 128 00:08:22.089 NGUID/EUI64 Never Reused: No 00:08:22.089 Namespace Write Protected: No 00:08:22.089 Endurance group ID: 1 00:08:22.089 Number of LBA Formats: 8 00:08:22.089 Current LBA Format: LBA Format #04 00:08:22.089 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.089 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.089 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.089 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.089 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.089 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.089 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:22.089 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.089 00:08:22.089 Get Feature FDP: 00:08:22.089 ================ 00:08:22.089 Enabled: Yes 00:08:22.089 FDP configuration index: 0 00:08:22.089 00:08:22.089 FDP configurations log page 00:08:22.089 =========================== 00:08:22.089 Number of FDP configurations: 1 00:08:22.089 Version: 0 00:08:22.089 Size: 112 00:08:22.089 FDP Configuration Descriptor: 0 00:08:22.089 Descriptor Size: 96 00:08:22.089 Reclaim Group Identifier format: 2 00:08:22.089 FDP Volatile Write Cache: Not Present 00:08:22.089 FDP Configuration: Valid 00:08:22.089 Vendor Specific Size: 0 00:08:22.089 Number of Reclaim Groups: 2 00:08:22.089 Number of Recalim Unit Handles: 8 00:08:22.089 Max Placement Identifiers: 128 00:08:22.089 Number of Namespaces Suppprted: 256 00:08:22.089 Reclaim unit Nominal Size: 6000000 bytes 00:08:22.089 Estimated Reclaim Unit Time Limit: Not Reported 00:08:22.089 RUH Desc #000: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #001: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #002: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #003: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #004: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #005: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #006: RUH Type: Initially Isolated 00:08:22.089 RUH Desc #007: RUH Type: Initially Isolated 00:08:22.089 00:08:22.089 FDP reclaim unit handle usage log page 00:08:22.089 ====================================== 00:08:22.089 Number of Reclaim Unit Handles: 8 00:08:22.089 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:22.089 RUH Usage Desc #001: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #002: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #003: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #004: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #005: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #006: RUH Attributes: Unused 00:08:22.089 RUH Usage Desc #007: RUH Attributes: Unused 00:08:22.089 00:08:22.089 FDP statistics log page 00:08:22.089 ======================= 00:08:22.089 Host bytes with metadata written: 629579776 00:08:22.089 Media bytes with metadata written: 632283136 00:08:22.089 Media bytes erased: 0 00:08:22.089 00:08:22.089 FDP events log page 00:08:22.089 =================== 00:08:22.089 Number of FDP events: 0 00:08:22.089 00:08:22.089 NVM Specific Namespace Data 00:08:22.089 =========================== 00:08:22.089 Logical Block Storage Tag Mask: 0 00:08:22.089 Protection Information Capabilities: 00:08:22.089 16b Guard Protection Information Storage Tag Support: No 00:08:22.089 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.089 Storage Tag Check Read Support: No 00:08:22.089 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.089 00:08:22.089 real 0m1.044s 00:08:22.089 user 0m0.337s 00:08:22.089 sys 0m0.505s 00:08:22.089 14:16:03 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.089 ************************************ 00:08:22.089 END TEST nvme_identify 00:08:22.089 ************************************ 00:08:22.089 14:16:03 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:22.348 14:16:03 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:22.348 14:16:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.348 14:16:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.348 14:16:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.348 ************************************ 00:08:22.348 START TEST nvme_perf 00:08:22.348 ************************************ 00:08:22.348 14:16:03 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:22.348 14:16:03 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:23.724 Initializing NVMe Controllers 00:08:23.724 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.724 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.724 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.724 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.724 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:23.724 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:23.724 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:23.724 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:23.724 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:23.724 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:23.724 Initialization complete. Launching workers. 
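Editor's note: the perf run just launched uses -q 128 (queue depth per namespace), -o 12288 (I/O size in bytes), -w read and -t 1 (one second), and the summary table that follows reports roughly 17490.91 IOPS and 204.97 MiB/s per namespace with an average latency around 7.3 ms. Those numbers are mutually consistent: throughput is IOPS times the 12288-byte I/O size, and with the queue kept full the average latency is approximately queue depth divided by IOPS (Little's law). A small stand-alone check, with values copied from the PCIE 0000:00:13.0 row below:
    # Sketch: sanity-check one row of the spdk_nvme_perf summary below.
    awk 'BEGIN {
        iops = 17490.91; io_size = 12288; qd = 128;   # from the table and command line
        printf "throughput : %.2f MiB/s\n", iops * io_size / (1024 * 1024);
        printf "avg latency: %.2f us\n", qd / iops * 1e6;
    }'
    # Prints about 204.97 MiB/s and 7318 us, in line with the reported
    # 204.97 MiB/s and 7320.19 us average for that namespace.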
00:08:23.724 ======================================================== 00:08:23.724 Latency(us) 00:08:23.724 Device Information : IOPS MiB/s Average min max 00:08:23.724 PCIE (0000:00:13.0) NSID 1 from core 0: 17490.91 204.97 7320.19 5064.87 27539.44 00:08:23.724 PCIE (0000:00:10.0) NSID 1 from core 0: 17490.91 204.97 7309.66 4761.34 26713.53 00:08:23.724 PCIE (0000:00:11.0) NSID 1 from core 0: 17490.91 204.97 7300.10 4611.24 25589.84 00:08:23.724 PCIE (0000:00:12.0) NSID 1 from core 0: 17490.91 204.97 7289.66 4072.25 24797.46 00:08:23.724 PCIE (0000:00:12.0) NSID 2 from core 0: 17490.91 204.97 7279.06 3910.61 23760.05 00:08:23.724 PCIE (0000:00:12.0) NSID 3 from core 0: 17490.91 204.97 7268.71 3651.31 22661.42 00:08:23.724 ======================================================== 00:08:23.724 Total : 104945.45 1229.83 7294.56 3651.31 27539.44 00:08:23.724 00:08:23.724 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:23.724 ================================================================================= 00:08:23.724 1.00000% : 6125.095us 00:08:23.724 10.00000% : 6251.126us 00:08:23.724 25.00000% : 6427.569us 00:08:23.724 50.00000% : 6755.249us 00:08:23.724 75.00000% : 7057.723us 00:08:23.724 90.00000% : 9023.803us 00:08:23.724 95.00000% : 10032.049us 00:08:23.724 98.00000% : 13611.323us 00:08:23.724 99.00000% : 15627.815us 00:08:23.724 99.50000% : 21475.643us 00:08:23.724 99.90000% : 27222.646us 00:08:23.724 99.99000% : 27625.945us 00:08:23.724 99.99900% : 27625.945us 00:08:23.724 99.99990% : 27625.945us 00:08:23.724 99.99999% : 27625.945us 00:08:23.724 00:08:23.724 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:23.724 ================================================================================= 00:08:23.724 1.00000% : 6049.477us 00:08:23.724 10.00000% : 6200.714us 00:08:23.724 25.00000% : 6402.363us 00:08:23.724 50.00000% : 6755.249us 00:08:23.724 75.00000% : 7158.548us 00:08:23.724 90.00000% : 9023.803us 00:08:23.724 95.00000% : 9931.225us 00:08:23.724 98.00000% : 13510.498us 00:08:23.724 99.00000% : 15728.640us 00:08:23.724 99.50000% : 20769.871us 00:08:23.724 99.90000% : 26416.049us 00:08:23.724 99.99000% : 26819.348us 00:08:23.724 99.99900% : 26819.348us 00:08:23.724 99.99990% : 26819.348us 00:08:23.724 99.99999% : 26819.348us 00:08:23.724 00:08:23.724 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:23.724 ================================================================================= 00:08:23.724 1.00000% : 6125.095us 00:08:23.724 10.00000% : 6251.126us 00:08:23.724 25.00000% : 6427.569us 00:08:23.724 50.00000% : 6755.249us 00:08:23.724 75.00000% : 7057.723us 00:08:23.724 90.00000% : 8973.391us 00:08:23.724 95.00000% : 10032.049us 00:08:23.724 98.00000% : 13107.200us 00:08:23.724 99.00000% : 16434.412us 00:08:23.724 99.50000% : 19761.625us 00:08:23.724 99.90000% : 25206.154us 00:08:23.724 99.99000% : 25609.452us 00:08:23.724 99.99900% : 25609.452us 00:08:23.724 99.99990% : 25609.452us 00:08:23.724 99.99999% : 25609.452us 00:08:23.724 00:08:23.724 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:23.724 ================================================================================= 00:08:23.724 1.00000% : 6125.095us 00:08:23.724 10.00000% : 6251.126us 00:08:23.724 25.00000% : 6427.569us 00:08:23.724 50.00000% : 6755.249us 00:08:23.725 75.00000% : 7057.723us 00:08:23.725 90.00000% : 8922.978us 00:08:23.725 95.00000% : 10384.935us 00:08:23.725 98.00000% : 13308.849us 00:08:23.725 99.00000% : 
16535.237us 00:08:23.725 99.50000% : 18955.028us 00:08:23.725 99.90000% : 24399.557us 00:08:23.725 99.99000% : 24802.855us 00:08:23.725 99.99900% : 24802.855us 00:08:23.725 99.99990% : 24802.855us 00:08:23.725 99.99999% : 24802.855us 00:08:23.725 00:08:23.725 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.725 ================================================================================= 00:08:23.725 1.00000% : 6099.889us 00:08:23.725 10.00000% : 6251.126us 00:08:23.725 25.00000% : 6427.569us 00:08:23.725 50.00000% : 6755.249us 00:08:23.725 75.00000% : 7057.723us 00:08:23.725 90.00000% : 8973.391us 00:08:23.725 95.00000% : 10284.111us 00:08:23.725 98.00000% : 13409.674us 00:08:23.725 99.00000% : 16232.763us 00:08:23.725 99.50000% : 17845.957us 00:08:23.725 99.90000% : 23391.311us 00:08:23.725 99.99000% : 23794.609us 00:08:23.725 99.99900% : 23794.609us 00:08:23.725 99.99990% : 23794.609us 00:08:23.725 99.99999% : 23794.609us 00:08:23.725 00:08:23.725 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.725 ================================================================================= 00:08:23.725 1.00000% : 6099.889us 00:08:23.725 10.00000% : 6251.126us 00:08:23.725 25.00000% : 6427.569us 00:08:23.725 50.00000% : 6755.249us 00:08:23.725 75.00000% : 7057.723us 00:08:23.725 90.00000% : 9023.803us 00:08:23.725 95.00000% : 10032.049us 00:08:23.725 98.00000% : 13510.498us 00:08:23.725 99.00000% : 15930.289us 00:08:23.725 99.50000% : 16837.711us 00:08:23.725 99.90000% : 22282.240us 00:08:23.725 99.99000% : 22685.538us 00:08:23.725 99.99900% : 22685.538us 00:08:23.725 99.99990% : 22685.538us 00:08:23.725 99.99999% : 22685.538us 00:08:23.725 00:08:23.725 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:23.725 ============================================================================== 00:08:23.725 Range in us Cumulative IO count 00:08:23.725 5041.231 - 5066.437: 0.0057% ( 1) 00:08:23.725 5066.437 - 5091.643: 0.0228% ( 3) 00:08:23.725 5091.643 - 5116.849: 0.0285% ( 1) 00:08:23.725 5116.849 - 5142.055: 0.0399% ( 2) 00:08:23.725 5142.055 - 5167.262: 0.0513% ( 2) 00:08:23.725 5167.262 - 5192.468: 0.0684% ( 3) 00:08:23.725 5192.468 - 5217.674: 0.0798% ( 2) 00:08:23.725 5217.674 - 5242.880: 0.0912% ( 2) 00:08:23.725 5242.880 - 5268.086: 0.1083% ( 3) 00:08:23.725 5268.086 - 5293.292: 0.1198% ( 2) 00:08:23.725 5293.292 - 5318.498: 0.1312% ( 2) 00:08:23.725 5318.498 - 5343.705: 0.1426% ( 2) 00:08:23.725 5343.705 - 5368.911: 0.1540% ( 2) 00:08:23.725 5368.911 - 5394.117: 0.1654% ( 2) 00:08:23.725 5394.117 - 5419.323: 0.1768% ( 2) 00:08:23.725 5419.323 - 5444.529: 0.1882% ( 2) 00:08:23.725 5444.529 - 5469.735: 0.1996% ( 2) 00:08:23.725 5469.735 - 5494.942: 0.2167% ( 3) 00:08:23.725 5494.942 - 5520.148: 0.2281% ( 2) 00:08:23.725 5520.148 - 5545.354: 0.2338% ( 1) 00:08:23.725 5545.354 - 5570.560: 0.2452% ( 2) 00:08:23.725 5570.560 - 5595.766: 0.2566% ( 2) 00:08:23.725 5595.766 - 5620.972: 0.2680% ( 2) 00:08:23.725 5620.972 - 5646.178: 0.2794% ( 2) 00:08:23.725 5646.178 - 5671.385: 0.2965% ( 3) 00:08:23.725 5671.385 - 5696.591: 0.3079% ( 2) 00:08:23.725 5696.591 - 5721.797: 0.3193% ( 2) 00:08:23.725 5721.797 - 5747.003: 0.3307% ( 2) 00:08:23.725 5747.003 - 5772.209: 0.3422% ( 2) 00:08:23.725 5772.209 - 5797.415: 0.3536% ( 2) 00:08:23.725 5797.415 - 5822.622: 0.3650% ( 2) 00:08:23.725 6024.271 - 6049.477: 0.3821% ( 3) 00:08:23.725 6049.477 - 6074.683: 0.4961% ( 20) 00:08:23.725 6074.683 - 6099.889: 0.9922% ( 87) 00:08:23.725 6099.889 - 
6125.095: 2.0586% ( 187) 00:08:23.725 6125.095 - 6150.302: 3.4044% ( 236) 00:08:23.725 6150.302 - 6175.508: 4.7217% ( 231) 00:08:23.725 6175.508 - 6200.714: 6.7176% ( 350) 00:08:23.725 6200.714 - 6225.920: 8.7420% ( 355) 00:08:23.725 6225.920 - 6251.126: 10.5668% ( 320) 00:08:23.725 6251.126 - 6276.332: 12.6939% ( 373) 00:08:23.725 6276.332 - 6301.538: 14.6841% ( 349) 00:08:23.725 6301.538 - 6326.745: 16.6857% ( 351) 00:08:23.725 6326.745 - 6351.951: 18.7272% ( 358) 00:08:23.725 6351.951 - 6377.157: 20.8143% ( 366) 00:08:23.725 6377.157 - 6402.363: 22.9072% ( 367) 00:08:23.725 6402.363 - 6427.569: 25.0000% ( 367) 00:08:23.725 6427.569 - 6452.775: 27.1156% ( 371) 00:08:23.725 6452.775 - 6503.188: 31.3869% ( 749) 00:08:23.725 6503.188 - 6553.600: 35.6809% ( 753) 00:08:23.725 6553.600 - 6604.012: 39.8894% ( 738) 00:08:23.725 6604.012 - 6654.425: 44.0922% ( 737) 00:08:23.725 6654.425 - 6704.837: 48.2721% ( 733) 00:08:23.725 6704.837 - 6755.249: 52.6004% ( 759) 00:08:23.725 6755.249 - 6805.662: 56.8031% ( 737) 00:08:23.725 6805.662 - 6856.074: 61.0972% ( 753) 00:08:23.725 6856.074 - 6906.486: 65.3684% ( 749) 00:08:23.725 6906.486 - 6956.898: 69.6624% ( 753) 00:08:23.725 6956.898 - 7007.311: 73.1809% ( 617) 00:08:23.725 7007.311 - 7057.723: 75.4904% ( 405) 00:08:23.725 7057.723 - 7108.135: 76.6766% ( 208) 00:08:23.725 7108.135 - 7158.548: 77.4350% ( 133) 00:08:23.725 7158.548 - 7208.960: 77.9368% ( 88) 00:08:23.725 7208.960 - 7259.372: 78.2391% ( 53) 00:08:23.725 7259.372 - 7309.785: 78.3987% ( 28) 00:08:23.725 7309.785 - 7360.197: 78.5185% ( 21) 00:08:23.725 7360.197 - 7410.609: 78.6211% ( 18) 00:08:23.725 7410.609 - 7461.022: 78.7181% ( 17) 00:08:23.725 7461.022 - 7511.434: 78.8207% ( 18) 00:08:23.725 7511.434 - 7561.846: 78.9177% ( 17) 00:08:23.725 7561.846 - 7612.258: 78.9975% ( 14) 00:08:23.725 7612.258 - 7662.671: 79.0602% ( 11) 00:08:23.725 7662.671 - 7713.083: 79.1172% ( 10) 00:08:23.725 7713.083 - 7763.495: 79.1572% ( 7) 00:08:23.725 7763.495 - 7813.908: 79.1800% ( 4) 00:08:23.725 7813.908 - 7864.320: 79.2085% ( 5) 00:08:23.725 7864.320 - 7914.732: 79.2427% ( 6) 00:08:23.725 7914.732 - 7965.145: 79.3111% ( 12) 00:08:23.725 7965.145 - 8015.557: 79.6020% ( 51) 00:08:23.725 8015.557 - 8065.969: 79.9954% ( 69) 00:08:23.725 8065.969 - 8116.382: 80.4573% ( 81) 00:08:23.725 8116.382 - 8166.794: 80.9135% ( 80) 00:08:23.725 8166.794 - 8217.206: 81.3013% ( 68) 00:08:23.725 8217.206 - 8267.618: 81.8830% ( 102) 00:08:23.725 8267.618 - 8318.031: 82.4133% ( 93) 00:08:23.725 8318.031 - 8368.443: 82.9665% ( 97) 00:08:23.725 8368.443 - 8418.855: 83.4968% ( 93) 00:08:23.725 8418.855 - 8469.268: 84.0842% ( 103) 00:08:23.725 8469.268 - 8519.680: 84.6259% ( 95) 00:08:23.725 8519.680 - 8570.092: 85.1962% ( 100) 00:08:23.725 8570.092 - 8620.505: 85.7379% ( 95) 00:08:23.725 8620.505 - 8670.917: 86.3196% ( 102) 00:08:23.725 8670.917 - 8721.329: 86.8841% ( 99) 00:08:23.725 8721.329 - 8771.742: 87.4658% ( 102) 00:08:23.725 8771.742 - 8822.154: 88.1045% ( 112) 00:08:23.725 8822.154 - 8872.566: 88.6690% ( 99) 00:08:23.725 8872.566 - 8922.978: 89.2735% ( 106) 00:08:23.725 8922.978 - 8973.391: 89.8837% ( 107) 00:08:23.725 8973.391 - 9023.803: 90.4824% ( 105) 00:08:23.725 9023.803 - 9074.215: 91.0983% ( 108) 00:08:23.725 9074.215 - 9124.628: 91.6629% ( 99) 00:08:23.725 9124.628 - 9175.040: 92.2844% ( 109) 00:08:23.725 9175.040 - 9225.452: 92.7635% ( 84) 00:08:23.725 9225.452 - 9275.865: 93.1284% ( 64) 00:08:23.725 9275.865 - 9326.277: 93.4592% ( 58) 00:08:23.725 9326.277 - 9376.689: 93.7044% ( 43) 00:08:23.725 
9376.689 - 9427.102: 93.8812% ( 31) 00:08:23.725 9427.102 - 9477.514: 94.0579% ( 31) 00:08:23.725 9477.514 - 9527.926: 94.1948% ( 24) 00:08:23.725 9527.926 - 9578.338: 94.3146% ( 21) 00:08:23.725 9578.338 - 9628.751: 94.4001% ( 15) 00:08:23.725 9628.751 - 9679.163: 94.4970% ( 17) 00:08:23.725 9679.163 - 9729.575: 94.5826% ( 15) 00:08:23.725 9729.575 - 9779.988: 94.6510% ( 12) 00:08:23.725 9779.988 - 9830.400: 94.7251% ( 13) 00:08:23.725 9830.400 - 9880.812: 94.7993% ( 13) 00:08:23.725 9880.812 - 9931.225: 94.8677% ( 12) 00:08:23.725 9931.225 - 9981.637: 94.9818% ( 20) 00:08:23.725 9981.637 - 10032.049: 95.0616% ( 14) 00:08:23.725 10032.049 - 10082.462: 95.1528% ( 16) 00:08:23.725 10082.462 - 10132.874: 95.2384% ( 15) 00:08:23.725 10132.874 - 10183.286: 95.3353% ( 17) 00:08:23.725 10183.286 - 10233.698: 95.4380% ( 18) 00:08:23.725 10233.698 - 10284.111: 95.5292% ( 16) 00:08:23.725 10284.111 - 10334.523: 95.6261% ( 17) 00:08:23.725 10334.523 - 10384.935: 95.7060% ( 14) 00:08:23.725 10384.935 - 10435.348: 95.7687% ( 11) 00:08:23.725 10435.348 - 10485.760: 95.8086% ( 7) 00:08:23.725 10485.760 - 10536.172: 95.8485% ( 7) 00:08:23.725 10536.172 - 10586.585: 95.8885% ( 7) 00:08:23.725 10586.585 - 10636.997: 95.9284% ( 7) 00:08:23.725 10636.997 - 10687.409: 95.9683% ( 7) 00:08:23.725 10687.409 - 10737.822: 96.0139% ( 8) 00:08:23.725 10737.822 - 10788.234: 96.1052% ( 16) 00:08:23.725 10788.234 - 10838.646: 96.1451% ( 7) 00:08:23.725 10838.646 - 10889.058: 96.1736% ( 5) 00:08:23.725 10889.058 - 10939.471: 96.2078% ( 6) 00:08:23.725 10939.471 - 10989.883: 96.2420% ( 6) 00:08:23.725 10989.883 - 11040.295: 96.2705% ( 5) 00:08:23.725 11040.295 - 11090.708: 96.2876% ( 3) 00:08:23.725 11090.708 - 11141.120: 96.3104% ( 4) 00:08:23.725 11141.120 - 11191.532: 96.3447% ( 6) 00:08:23.725 11191.532 - 11241.945: 96.3903% ( 8) 00:08:23.725 11241.945 - 11292.357: 96.4302% ( 7) 00:08:23.725 11292.357 - 11342.769: 96.4758% ( 8) 00:08:23.725 11342.769 - 11393.182: 96.5157% ( 7) 00:08:23.725 11393.182 - 11443.594: 96.5557% ( 7) 00:08:23.725 11443.594 - 11494.006: 96.6013% ( 8) 00:08:23.725 11494.006 - 11544.418: 96.6412% ( 7) 00:08:23.725 11544.418 - 11594.831: 96.6868% ( 8) 00:08:23.725 11594.831 - 11645.243: 96.7153% ( 5) 00:08:23.725 11645.243 - 11695.655: 96.7609% ( 8) 00:08:23.725 11695.655 - 11746.068: 96.8066% ( 8) 00:08:23.725 11746.068 - 11796.480: 96.8465% ( 7) 00:08:23.725 11796.480 - 11846.892: 96.8921% ( 8) 00:08:23.725 11846.892 - 11897.305: 96.9320% ( 7) 00:08:23.725 11897.305 - 11947.717: 96.9719% ( 7) 00:08:23.725 11947.717 - 11998.129: 97.0176% ( 8) 00:08:23.725 11998.129 - 12048.542: 97.0746% ( 10) 00:08:23.725 12048.542 - 12098.954: 97.1145% ( 7) 00:08:23.725 12098.954 - 12149.366: 97.1601% ( 8) 00:08:23.725 12149.366 - 12199.778: 97.2057% ( 8) 00:08:23.726 12199.778 - 12250.191: 97.2457% ( 7) 00:08:23.726 12250.191 - 12300.603: 97.2742% ( 5) 00:08:23.726 12300.603 - 12351.015: 97.3084% ( 6) 00:08:23.726 12351.015 - 12401.428: 97.3312% ( 4) 00:08:23.726 12401.428 - 12451.840: 97.3483% ( 3) 00:08:23.726 12451.840 - 12502.252: 97.3825% ( 6) 00:08:23.726 12502.252 - 12552.665: 97.4167% ( 6) 00:08:23.726 12552.665 - 12603.077: 97.4510% ( 6) 00:08:23.726 12603.077 - 12653.489: 97.4909% ( 7) 00:08:23.726 12653.489 - 12703.902: 97.5308% ( 7) 00:08:23.726 12703.902 - 12754.314: 97.5707% ( 7) 00:08:23.726 12754.314 - 12804.726: 97.6163% ( 8) 00:08:23.726 12804.726 - 12855.138: 97.6562% ( 7) 00:08:23.726 12855.138 - 12905.551: 97.6962% ( 7) 00:08:23.726 12905.551 - 13006.375: 97.7532% ( 10) 00:08:23.726 
13006.375 - 13107.200: 97.7931% ( 7) 00:08:23.726 13107.200 - 13208.025: 97.8387% ( 8) 00:08:23.726 13208.025 - 13308.849: 97.8729% ( 6) 00:08:23.726 13308.849 - 13409.674: 97.9129% ( 7) 00:08:23.726 13409.674 - 13510.498: 97.9585% ( 8) 00:08:23.726 13510.498 - 13611.323: 98.0269% ( 12) 00:08:23.726 13611.323 - 13712.148: 98.0839% ( 10) 00:08:23.726 13712.148 - 13812.972: 98.1182% ( 6) 00:08:23.726 13812.972 - 13913.797: 98.1581% ( 7) 00:08:23.726 13913.797 - 14014.622: 98.2037% ( 8) 00:08:23.726 14014.622 - 14115.446: 98.2436% ( 7) 00:08:23.726 14115.446 - 14216.271: 98.2835% ( 7) 00:08:23.726 14216.271 - 14317.095: 98.3292% ( 8) 00:08:23.726 14317.095 - 14417.920: 98.3577% ( 5) 00:08:23.726 14417.920 - 14518.745: 98.4033% ( 8) 00:08:23.726 14518.745 - 14619.569: 98.4489% ( 8) 00:08:23.726 14619.569 - 14720.394: 98.5230% ( 13) 00:08:23.726 14720.394 - 14821.218: 98.5744% ( 9) 00:08:23.726 14821.218 - 14922.043: 98.6314% ( 10) 00:08:23.726 14922.043 - 15022.868: 98.7112% ( 14) 00:08:23.726 15022.868 - 15123.692: 98.7625% ( 9) 00:08:23.726 15123.692 - 15224.517: 98.8253% ( 11) 00:08:23.726 15224.517 - 15325.342: 98.8823% ( 10) 00:08:23.726 15325.342 - 15426.166: 98.9222% ( 7) 00:08:23.726 15426.166 - 15526.991: 98.9621% ( 7) 00:08:23.726 15526.991 - 15627.815: 99.0021% ( 7) 00:08:23.726 15627.815 - 15728.640: 99.0363% ( 6) 00:08:23.726 15728.640 - 15829.465: 99.0762% ( 7) 00:08:23.726 15829.465 - 15930.289: 99.1161% ( 7) 00:08:23.726 15930.289 - 16031.114: 99.1503% ( 6) 00:08:23.726 16031.114 - 16131.938: 99.1845% ( 6) 00:08:23.726 16131.938 - 16232.763: 99.2245% ( 7) 00:08:23.726 16232.763 - 16333.588: 99.2530% ( 5) 00:08:23.726 16333.588 - 16434.412: 99.2701% ( 3) 00:08:23.726 20366.572 - 20467.397: 99.2758% ( 1) 00:08:23.726 20467.397 - 20568.222: 99.2986% ( 4) 00:08:23.726 20568.222 - 20669.046: 99.3271% ( 5) 00:08:23.726 20669.046 - 20769.871: 99.3556% ( 5) 00:08:23.726 20769.871 - 20870.695: 99.3727% ( 3) 00:08:23.726 20870.695 - 20971.520: 99.3955% ( 4) 00:08:23.726 20971.520 - 21072.345: 99.4240% ( 5) 00:08:23.726 21072.345 - 21173.169: 99.4469% ( 4) 00:08:23.726 21173.169 - 21273.994: 99.4697% ( 4) 00:08:23.726 21273.994 - 21374.818: 99.4925% ( 4) 00:08:23.726 21374.818 - 21475.643: 99.5210% ( 5) 00:08:23.726 21475.643 - 21576.468: 99.5438% ( 4) 00:08:23.726 21576.468 - 21677.292: 99.5723% ( 5) 00:08:23.726 21677.292 - 21778.117: 99.5951% ( 4) 00:08:23.726 21778.117 - 21878.942: 99.6236% ( 5) 00:08:23.726 21878.942 - 21979.766: 99.6350% ( 2) 00:08:23.726 26012.751 - 26214.400: 99.6750% ( 7) 00:08:23.726 26214.400 - 26416.049: 99.7263% ( 9) 00:08:23.726 26416.049 - 26617.698: 99.7776% ( 9) 00:08:23.726 26617.698 - 26819.348: 99.8289% ( 9) 00:08:23.726 26819.348 - 27020.997: 99.8745% ( 8) 00:08:23.726 27020.997 - 27222.646: 99.9259% ( 9) 00:08:23.726 27222.646 - 27424.295: 99.9715% ( 8) 00:08:23.726 27424.295 - 27625.945: 100.0000% ( 5) 00:08:23.726 00:08:23.726 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:23.726 ============================================================================== 00:08:23.726 Range in us Cumulative IO count 00:08:23.726 4738.757 - 4763.963: 0.0114% ( 2) 00:08:23.726 4763.963 - 4789.169: 0.0228% ( 2) 00:08:23.726 4789.169 - 4814.375: 0.0399% ( 3) 00:08:23.726 4814.375 - 4839.582: 0.0513% ( 2) 00:08:23.726 4839.582 - 4864.788: 0.0570% ( 1) 00:08:23.726 4864.788 - 4889.994: 0.0627% ( 1) 00:08:23.726 4889.994 - 4915.200: 0.0741% ( 2) 00:08:23.726 4915.200 - 4940.406: 0.0798% ( 1) 00:08:23.726 4940.406 - 4965.612: 0.0969% ( 3) 
00:08:23.726 4965.612 - 4990.818: 0.1083% ( 2) 00:08:23.726 4990.818 - 5016.025: 0.1198% ( 2) 00:08:23.726 5016.025 - 5041.231: 0.1255% ( 1) 00:08:23.726 5041.231 - 5066.437: 0.1369% ( 2) 00:08:23.726 5066.437 - 5091.643: 0.1426% ( 1) 00:08:23.726 5091.643 - 5116.849: 0.1540% ( 2) 00:08:23.726 5116.849 - 5142.055: 0.1711% ( 3) 00:08:23.726 5167.262 - 5192.468: 0.1825% ( 2) 00:08:23.726 5192.468 - 5217.674: 0.1996% ( 3) 00:08:23.726 5217.674 - 5242.880: 0.2053% ( 1) 00:08:23.726 5242.880 - 5268.086: 0.2110% ( 1) 00:08:23.726 5268.086 - 5293.292: 0.2281% ( 3) 00:08:23.726 5293.292 - 5318.498: 0.2338% ( 1) 00:08:23.726 5318.498 - 5343.705: 0.2452% ( 2) 00:08:23.726 5343.705 - 5368.911: 0.2566% ( 2) 00:08:23.726 5368.911 - 5394.117: 0.2737% ( 3) 00:08:23.726 5394.117 - 5419.323: 0.2794% ( 1) 00:08:23.726 5419.323 - 5444.529: 0.2908% ( 2) 00:08:23.726 5444.529 - 5469.735: 0.3022% ( 2) 00:08:23.726 5469.735 - 5494.942: 0.3079% ( 1) 00:08:23.726 5494.942 - 5520.148: 0.3250% ( 3) 00:08:23.726 5520.148 - 5545.354: 0.3307% ( 1) 00:08:23.726 5545.354 - 5570.560: 0.3422% ( 2) 00:08:23.726 5570.560 - 5595.766: 0.3479% ( 1) 00:08:23.726 5595.766 - 5620.972: 0.3650% ( 3) 00:08:23.726 5948.652 - 5973.858: 0.3707% ( 1) 00:08:23.726 5973.858 - 5999.065: 0.4961% ( 22) 00:08:23.726 5999.065 - 6024.271: 0.9523% ( 80) 00:08:23.726 6024.271 - 6049.477: 1.7564% ( 141) 00:08:23.726 6049.477 - 6074.683: 2.9881% ( 216) 00:08:23.726 6074.683 - 6099.889: 4.5620% ( 276) 00:08:23.726 6099.889 - 6125.095: 6.3983% ( 322) 00:08:23.726 6125.095 - 6150.302: 8.1889% ( 314) 00:08:23.726 6150.302 - 6175.508: 9.9510% ( 309) 00:08:23.726 6175.508 - 6200.714: 11.6446% ( 297) 00:08:23.726 6200.714 - 6225.920: 13.2242% ( 277) 00:08:23.726 6225.920 - 6251.126: 14.8951% ( 293) 00:08:23.726 6251.126 - 6276.332: 16.6286% ( 304) 00:08:23.726 6276.332 - 6301.538: 18.2368% ( 282) 00:08:23.726 6301.538 - 6326.745: 19.9133% ( 294) 00:08:23.726 6326.745 - 6351.951: 21.6925% ( 312) 00:08:23.726 6351.951 - 6377.157: 23.4660% ( 311) 00:08:23.726 6377.157 - 6402.363: 25.2566% ( 314) 00:08:23.726 6402.363 - 6427.569: 27.0244% ( 310) 00:08:23.726 6427.569 - 6452.775: 28.8378% ( 318) 00:08:23.726 6452.775 - 6503.188: 32.5844% ( 657) 00:08:23.726 6503.188 - 6553.600: 36.0915% ( 615) 00:08:23.726 6553.600 - 6604.012: 39.6784% ( 629) 00:08:23.726 6604.012 - 6654.425: 43.3280% ( 640) 00:08:23.726 6654.425 - 6704.837: 46.9605% ( 637) 00:08:23.726 6704.837 - 6755.249: 50.4847% ( 618) 00:08:23.726 6755.249 - 6805.662: 54.0944% ( 633) 00:08:23.726 6805.662 - 6856.074: 57.8581% ( 660) 00:08:23.726 6856.074 - 6906.486: 61.5420% ( 646) 00:08:23.726 6906.486 - 6956.898: 65.2258% ( 646) 00:08:23.726 6956.898 - 7007.311: 68.8184% ( 630) 00:08:23.726 7007.311 - 7057.723: 72.3540% ( 620) 00:08:23.726 7057.723 - 7108.135: 74.9658% ( 458) 00:08:23.726 7108.135 - 7158.548: 76.4542% ( 261) 00:08:23.726 7158.548 - 7208.960: 77.2411% ( 138) 00:08:23.726 7208.960 - 7259.372: 77.8171% ( 101) 00:08:23.726 7259.372 - 7309.785: 78.1592% ( 60) 00:08:23.726 7309.785 - 7360.197: 78.4101% ( 44) 00:08:23.726 7360.197 - 7410.609: 78.5299% ( 21) 00:08:23.726 7410.609 - 7461.022: 78.6268% ( 17) 00:08:23.726 7461.022 - 7511.434: 78.7124% ( 15) 00:08:23.726 7511.434 - 7561.846: 78.7922% ( 14) 00:08:23.726 7561.846 - 7612.258: 78.8834% ( 16) 00:08:23.726 7612.258 - 7662.671: 78.9576% ( 13) 00:08:23.726 7662.671 - 7713.083: 79.0146% ( 10) 00:08:23.726 7713.083 - 7763.495: 79.0716% ( 10) 00:08:23.726 7763.495 - 7813.908: 79.1629% ( 16) 00:08:23.726 7813.908 - 7864.320: 79.3682% ( 36) 
00:08:23.726 7864.320 - 7914.732: 79.7160% ( 61) 00:08:23.726 7914.732 - 7965.145: 80.1266% ( 72) 00:08:23.726 7965.145 - 8015.557: 80.4745% ( 61) 00:08:23.726 8015.557 - 8065.969: 80.8850% ( 72) 00:08:23.726 8065.969 - 8116.382: 81.3298% ( 78) 00:08:23.726 8116.382 - 8166.794: 81.8431% ( 90) 00:08:23.726 8166.794 - 8217.206: 82.2822% ( 77) 00:08:23.726 8217.206 - 8267.618: 82.8353% ( 97) 00:08:23.726 8267.618 - 8318.031: 83.2288% ( 69) 00:08:23.726 8318.031 - 8368.443: 83.7705% ( 95) 00:08:23.726 8368.443 - 8418.855: 84.2267% ( 80) 00:08:23.726 8418.855 - 8469.268: 84.7685% ( 95) 00:08:23.726 8469.268 - 8519.680: 85.2532% ( 85) 00:08:23.726 8519.680 - 8570.092: 85.7208% ( 82) 00:08:23.726 8570.092 - 8620.505: 86.1713% ( 79) 00:08:23.726 8620.505 - 8670.917: 86.7130% ( 95) 00:08:23.726 8670.917 - 8721.329: 87.1635% ( 79) 00:08:23.726 8721.329 - 8771.742: 87.6711% ( 89) 00:08:23.726 8771.742 - 8822.154: 88.1558% ( 85) 00:08:23.726 8822.154 - 8872.566: 88.6348% ( 84) 00:08:23.726 8872.566 - 8922.978: 89.1366% ( 88) 00:08:23.726 8922.978 - 8973.391: 89.6727% ( 94) 00:08:23.726 8973.391 - 9023.803: 90.1061% ( 76) 00:08:23.726 9023.803 - 9074.215: 90.6136% ( 89) 00:08:23.726 9074.215 - 9124.628: 91.1268% ( 90) 00:08:23.726 9124.628 - 9175.040: 91.6058% ( 84) 00:08:23.726 9175.040 - 9225.452: 92.1305% ( 92) 00:08:23.726 9225.452 - 9275.865: 92.6494% ( 91) 00:08:23.726 9275.865 - 9326.277: 93.1170% ( 82) 00:08:23.726 9326.277 - 9376.689: 93.4877% ( 65) 00:08:23.726 9376.689 - 9427.102: 93.7557% ( 47) 00:08:23.726 9427.102 - 9477.514: 94.0465% ( 51) 00:08:23.726 9477.514 - 9527.926: 94.2347% ( 33) 00:08:23.726 9527.926 - 9578.338: 94.4115% ( 31) 00:08:23.726 9578.338 - 9628.751: 94.5712% ( 28) 00:08:23.726 9628.751 - 9679.163: 94.6795% ( 19) 00:08:23.726 9679.163 - 9729.575: 94.7765% ( 17) 00:08:23.726 9729.575 - 9779.988: 94.8278% ( 9) 00:08:23.726 9779.988 - 9830.400: 94.8848% ( 10) 00:08:23.726 9830.400 - 9880.812: 94.9475% ( 11) 00:08:23.726 9880.812 - 9931.225: 95.0046% ( 10) 00:08:23.726 9931.225 - 9981.637: 95.0559% ( 9) 00:08:23.726 9981.637 - 10032.049: 95.1357% ( 14) 00:08:23.726 10032.049 - 10082.462: 95.1927% ( 10) 00:08:23.726 10082.462 - 10132.874: 95.2612% ( 12) 00:08:23.726 10132.874 - 10183.286: 95.3068% ( 8) 00:08:23.726 10183.286 - 10233.698: 95.3410% ( 6) 00:08:23.726 10233.698 - 10284.111: 95.3809% ( 7) 00:08:23.726 10284.111 - 10334.523: 95.4266% ( 8) 00:08:23.726 10334.523 - 10384.935: 95.4722% ( 8) 00:08:23.726 10384.935 - 10435.348: 95.5007% ( 5) 00:08:23.726 10435.348 - 10485.760: 95.5406% ( 7) 00:08:23.726 10485.760 - 10536.172: 95.5805% ( 7) 00:08:23.726 10536.172 - 10586.585: 95.6147% ( 6) 00:08:23.726 10586.585 - 10636.997: 95.6604% ( 8) 00:08:23.726 10636.997 - 10687.409: 95.7003% ( 7) 00:08:23.726 10687.409 - 10737.822: 95.7402% ( 7) 00:08:23.726 10737.822 - 10788.234: 95.7972% ( 10) 00:08:23.727 10788.234 - 10838.646: 95.8314% ( 6) 00:08:23.727 10838.646 - 10889.058: 95.8771% ( 8) 00:08:23.727 10889.058 - 10939.471: 95.9170% ( 7) 00:08:23.727 10939.471 - 10989.883: 95.9740% ( 10) 00:08:23.727 10989.883 - 11040.295: 96.0082% ( 6) 00:08:23.727 11040.295 - 11090.708: 96.0880% ( 14) 00:08:23.727 11090.708 - 11141.120: 96.1451% ( 10) 00:08:23.727 11141.120 - 11191.532: 96.2192% ( 13) 00:08:23.727 11191.532 - 11241.945: 96.2876% ( 12) 00:08:23.727 11241.945 - 11292.357: 96.3504% ( 11) 00:08:23.727 11292.357 - 11342.769: 96.4302% ( 14) 00:08:23.727 11342.769 - 11393.182: 96.5043% ( 13) 00:08:23.727 11393.182 - 11443.594: 96.5557% ( 9) 00:08:23.727 11443.594 - 11494.006: 
96.6013% ( 8) 00:08:23.727 11494.006 - 11544.418: 96.6583% ( 10) 00:08:23.727 11544.418 - 11594.831: 96.7153% ( 10) 00:08:23.727 11594.831 - 11645.243: 96.7667% ( 9) 00:08:23.727 11645.243 - 11695.655: 96.8123% ( 8) 00:08:23.727 11695.655 - 11746.068: 96.8807% ( 12) 00:08:23.727 11746.068 - 11796.480: 96.9263% ( 8) 00:08:23.727 11796.480 - 11846.892: 96.9776% ( 9) 00:08:23.727 11846.892 - 11897.305: 97.0176% ( 7) 00:08:23.727 11897.305 - 11947.717: 97.0689% ( 9) 00:08:23.727 11947.717 - 11998.129: 97.1259% ( 10) 00:08:23.727 11998.129 - 12048.542: 97.1658% ( 7) 00:08:23.727 12048.542 - 12098.954: 97.2229% ( 10) 00:08:23.727 12098.954 - 12149.366: 97.2571% ( 6) 00:08:23.727 12149.366 - 12199.778: 97.2913% ( 6) 00:08:23.727 12199.778 - 12250.191: 97.3198% ( 5) 00:08:23.727 12250.191 - 12300.603: 97.3540% ( 6) 00:08:23.727 12300.603 - 12351.015: 97.3882% ( 6) 00:08:23.727 12351.015 - 12401.428: 97.4224% ( 6) 00:08:23.727 12401.428 - 12451.840: 97.4681% ( 8) 00:08:23.727 12451.840 - 12502.252: 97.5080% ( 7) 00:08:23.727 12502.252 - 12552.665: 97.5593% ( 9) 00:08:23.727 12552.665 - 12603.077: 97.5992% ( 7) 00:08:23.727 12603.077 - 12653.489: 97.6334% ( 6) 00:08:23.727 12653.489 - 12703.902: 97.6848% ( 9) 00:08:23.727 12703.902 - 12754.314: 97.7133% ( 5) 00:08:23.727 12754.314 - 12804.726: 97.7532% ( 7) 00:08:23.727 12804.726 - 12855.138: 97.7874% ( 6) 00:08:23.727 12855.138 - 12905.551: 97.8045% ( 3) 00:08:23.727 12905.551 - 13006.375: 97.8558% ( 9) 00:08:23.727 13006.375 - 13107.200: 97.9015% ( 8) 00:08:23.727 13107.200 - 13208.025: 97.9357% ( 6) 00:08:23.727 13208.025 - 13308.849: 97.9642% ( 5) 00:08:23.727 13308.849 - 13409.674: 97.9927% ( 5) 00:08:23.727 13409.674 - 13510.498: 98.0383% ( 8) 00:08:23.727 13510.498 - 13611.323: 98.0668% ( 5) 00:08:23.727 13611.323 - 13712.148: 98.1010% ( 6) 00:08:23.727 13712.148 - 13812.972: 98.1467% ( 8) 00:08:23.727 13812.972 - 13913.797: 98.2094% ( 11) 00:08:23.727 13913.797 - 14014.622: 98.2607% ( 9) 00:08:23.727 14014.622 - 14115.446: 98.3120% ( 9) 00:08:23.727 14115.446 - 14216.271: 98.3634% ( 9) 00:08:23.727 14216.271 - 14317.095: 98.4204% ( 10) 00:08:23.727 14317.095 - 14417.920: 98.4660% ( 8) 00:08:23.727 14417.920 - 14518.745: 98.5116% ( 8) 00:08:23.727 14518.745 - 14619.569: 98.5458% ( 6) 00:08:23.727 14619.569 - 14720.394: 98.5801% ( 6) 00:08:23.727 14720.394 - 14821.218: 98.6257% ( 8) 00:08:23.727 14821.218 - 14922.043: 98.6485% ( 4) 00:08:23.727 14922.043 - 15022.868: 98.6941% ( 8) 00:08:23.727 15022.868 - 15123.692: 98.7568% ( 11) 00:08:23.727 15123.692 - 15224.517: 98.8196% ( 11) 00:08:23.727 15224.517 - 15325.342: 98.8595% ( 7) 00:08:23.727 15325.342 - 15426.166: 98.8994% ( 7) 00:08:23.727 15426.166 - 15526.991: 98.9450% ( 8) 00:08:23.727 15526.991 - 15627.815: 98.9792% ( 6) 00:08:23.727 15627.815 - 15728.640: 99.0078% ( 5) 00:08:23.727 15728.640 - 15829.465: 99.0363% ( 5) 00:08:23.727 15829.465 - 15930.289: 99.0591% ( 4) 00:08:23.727 15930.289 - 16031.114: 99.0819% ( 4) 00:08:23.727 16031.114 - 16131.938: 99.0933% ( 2) 00:08:23.727 16131.938 - 16232.763: 99.1047% ( 2) 00:08:23.727 16232.763 - 16333.588: 99.1218% ( 3) 00:08:23.727 16333.588 - 16434.412: 99.1389% ( 3) 00:08:23.727 16434.412 - 16535.237: 99.1560% ( 3) 00:08:23.727 16535.237 - 16636.062: 99.1674% ( 2) 00:08:23.727 16636.062 - 16736.886: 99.1845% ( 3) 00:08:23.727 16736.886 - 16837.711: 99.2016% ( 3) 00:08:23.727 16837.711 - 16938.535: 99.2130% ( 2) 00:08:23.727 16938.535 - 17039.360: 99.2302% ( 3) 00:08:23.727 17039.360 - 17140.185: 99.2473% ( 3) 00:08:23.727 17140.185 - 
17241.009: 99.2644% ( 3) 00:08:23.727 17241.009 - 17341.834: 99.2701% ( 1) 00:08:23.727 19660.800 - 19761.625: 99.2986% ( 5) 00:08:23.727 19761.625 - 19862.449: 99.3214% ( 4) 00:08:23.727 19862.449 - 19963.274: 99.3385% ( 3) 00:08:23.727 19963.274 - 20064.098: 99.3556% ( 3) 00:08:23.727 20064.098 - 20164.923: 99.3784% ( 4) 00:08:23.727 20164.923 - 20265.748: 99.4012% ( 4) 00:08:23.727 20265.748 - 20366.572: 99.4240% ( 4) 00:08:23.727 20366.572 - 20467.397: 99.4469% ( 4) 00:08:23.727 20467.397 - 20568.222: 99.4640% ( 3) 00:08:23.727 20568.222 - 20669.046: 99.4925% ( 5) 00:08:23.727 20669.046 - 20769.871: 99.5096% ( 3) 00:08:23.727 20769.871 - 20870.695: 99.5324% ( 4) 00:08:23.727 20870.695 - 20971.520: 99.5552% ( 4) 00:08:23.727 20971.520 - 21072.345: 99.5780% ( 4) 00:08:23.727 21072.345 - 21173.169: 99.6065% ( 5) 00:08:23.727 21173.169 - 21273.994: 99.6236% ( 3) 00:08:23.727 21273.994 - 21374.818: 99.6350% ( 2) 00:08:23.727 24903.680 - 25004.505: 99.6407% ( 1) 00:08:23.727 25004.505 - 25105.329: 99.6578% ( 3) 00:08:23.727 25105.329 - 25206.154: 99.6807% ( 4) 00:08:23.727 25206.154 - 25306.978: 99.6921% ( 2) 00:08:23.727 25306.978 - 25407.803: 99.7206% ( 5) 00:08:23.727 25407.803 - 25508.628: 99.7377% ( 3) 00:08:23.727 25508.628 - 25609.452: 99.7548% ( 3) 00:08:23.727 25609.452 - 25710.277: 99.7833% ( 5) 00:08:23.727 25710.277 - 25811.102: 99.8004% ( 3) 00:08:23.727 25811.102 - 26012.751: 99.8403% ( 7) 00:08:23.727 26012.751 - 26214.400: 99.8859% ( 8) 00:08:23.727 26214.400 - 26416.049: 99.9316% ( 8) 00:08:23.727 26416.049 - 26617.698: 99.9829% ( 9) 00:08:23.727 26617.698 - 26819.348: 100.0000% ( 3) 00:08:23.727 00:08:23.727 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:23.727 ============================================================================== 00:08:23.727 Range in us Cumulative IO count 00:08:23.727 4587.520 - 4612.726: 0.0057% ( 1) 00:08:23.727 4612.726 - 4637.932: 0.0285% ( 4) 00:08:23.727 4637.932 - 4663.138: 0.0456% ( 3) 00:08:23.727 4688.345 - 4713.551: 0.0570% ( 2) 00:08:23.727 4713.551 - 4738.757: 0.0684% ( 2) 00:08:23.727 4738.757 - 4763.963: 0.0855% ( 3) 00:08:23.727 4763.963 - 4789.169: 0.0969% ( 2) 00:08:23.727 4789.169 - 4814.375: 0.1083% ( 2) 00:08:23.727 4814.375 - 4839.582: 0.1198% ( 2) 00:08:23.727 4839.582 - 4864.788: 0.1312% ( 2) 00:08:23.727 4864.788 - 4889.994: 0.1426% ( 2) 00:08:23.727 4889.994 - 4915.200: 0.1540% ( 2) 00:08:23.727 4915.200 - 4940.406: 0.1654% ( 2) 00:08:23.727 4940.406 - 4965.612: 0.1768% ( 2) 00:08:23.727 4965.612 - 4990.818: 0.1882% ( 2) 00:08:23.727 4990.818 - 5016.025: 0.2053% ( 3) 00:08:23.727 5016.025 - 5041.231: 0.2167% ( 2) 00:08:23.727 5041.231 - 5066.437: 0.2281% ( 2) 00:08:23.727 5066.437 - 5091.643: 0.2395% ( 2) 00:08:23.727 5091.643 - 5116.849: 0.2509% ( 2) 00:08:23.727 5116.849 - 5142.055: 0.2680% ( 3) 00:08:23.727 5142.055 - 5167.262: 0.2794% ( 2) 00:08:23.727 5167.262 - 5192.468: 0.2908% ( 2) 00:08:23.727 5192.468 - 5217.674: 0.3022% ( 2) 00:08:23.727 5217.674 - 5242.880: 0.3136% ( 2) 00:08:23.727 5242.880 - 5268.086: 0.3193% ( 1) 00:08:23.727 5268.086 - 5293.292: 0.3365% ( 3) 00:08:23.727 5293.292 - 5318.498: 0.3479% ( 2) 00:08:23.727 5318.498 - 5343.705: 0.3593% ( 2) 00:08:23.727 5343.705 - 5368.911: 0.3650% ( 1) 00:08:23.727 6049.477 - 6074.683: 0.3878% ( 4) 00:08:23.727 6074.683 - 6099.889: 0.7584% ( 65) 00:08:23.727 6099.889 - 6125.095: 1.5853% ( 145) 00:08:23.727 6125.095 - 6150.302: 2.6232% ( 182) 00:08:23.727 6150.302 - 6175.508: 4.4366% ( 318) 00:08:23.727 6175.508 - 6200.714: 6.7575% ( 407) 
00:08:23.727 6200.714 - 6225.920: 8.9986% ( 393) 00:08:23.727 6225.920 - 6251.126: 11.0002% ( 351) 00:08:23.727 6251.126 - 6276.332: 13.0360% ( 357) 00:08:23.727 6276.332 - 6301.538: 15.0719% ( 357) 00:08:23.727 6301.538 - 6326.745: 17.0335% ( 344) 00:08:23.727 6326.745 - 6351.951: 18.9211% ( 331) 00:08:23.727 6351.951 - 6377.157: 20.8999% ( 347) 00:08:23.727 6377.157 - 6402.363: 23.0155% ( 371) 00:08:23.727 6402.363 - 6427.569: 25.0741% ( 361) 00:08:23.727 6427.569 - 6452.775: 27.2012% ( 373) 00:08:23.727 6452.775 - 6503.188: 31.4553% ( 746) 00:08:23.727 6503.188 - 6553.600: 35.6353% ( 733) 00:08:23.727 6553.600 - 6604.012: 39.9635% ( 759) 00:08:23.727 6604.012 - 6654.425: 44.1264% ( 730) 00:08:23.727 6654.425 - 6704.837: 48.3520% ( 741) 00:08:23.727 6704.837 - 6755.249: 52.5547% ( 737) 00:08:23.727 6755.249 - 6805.662: 56.8203% ( 748) 00:08:23.727 6805.662 - 6856.074: 61.0972% ( 750) 00:08:23.727 6856.074 - 6906.486: 65.3342% ( 743) 00:08:23.727 6906.486 - 6956.898: 69.5655% ( 742) 00:08:23.727 6956.898 - 7007.311: 73.0839% ( 617) 00:08:23.727 7007.311 - 7057.723: 75.3193% ( 392) 00:08:23.727 7057.723 - 7108.135: 76.4599% ( 200) 00:08:23.727 7108.135 - 7158.548: 77.2126% ( 132) 00:08:23.727 7158.548 - 7208.960: 77.6688% ( 80) 00:08:23.727 7208.960 - 7259.372: 78.0052% ( 59) 00:08:23.727 7259.372 - 7309.785: 78.1877% ( 32) 00:08:23.727 7309.785 - 7360.197: 78.3474% ( 28) 00:08:23.727 7360.197 - 7410.609: 78.4615% ( 20) 00:08:23.727 7410.609 - 7461.022: 78.6097% ( 26) 00:08:23.727 7461.022 - 7511.434: 78.7067% ( 17) 00:08:23.727 7511.434 - 7561.846: 78.8264% ( 21) 00:08:23.727 7561.846 - 7612.258: 78.9348% ( 19) 00:08:23.727 7612.258 - 7662.671: 79.0488% ( 20) 00:08:23.727 7662.671 - 7713.083: 79.1743% ( 22) 00:08:23.727 7713.083 - 7763.495: 79.2655% ( 16) 00:08:23.727 7763.495 - 7813.908: 79.3510% ( 15) 00:08:23.727 7813.908 - 7864.320: 79.4252% ( 13) 00:08:23.727 7864.320 - 7914.732: 79.4993% ( 13) 00:08:23.727 7914.732 - 7965.145: 79.6191% ( 21) 00:08:23.727 7965.145 - 8015.557: 79.9840% ( 64) 00:08:23.727 8015.557 - 8065.969: 80.3661% ( 67) 00:08:23.727 8065.969 - 8116.382: 80.9135% ( 96) 00:08:23.727 8116.382 - 8166.794: 81.3469% ( 76) 00:08:23.727 8166.794 - 8217.206: 81.8260% ( 84) 00:08:23.727 8217.206 - 8267.618: 82.4133% ( 103) 00:08:23.727 8267.618 - 8318.031: 82.9893% ( 101) 00:08:23.727 8318.031 - 8368.443: 83.5995% ( 107) 00:08:23.727 8368.443 - 8418.855: 84.1469% ( 96) 00:08:23.727 8418.855 - 8469.268: 84.7400% ( 104) 00:08:23.727 8469.268 - 8519.680: 85.2760% ( 94) 00:08:23.728 8519.680 - 8570.092: 85.8577% ( 102) 00:08:23.728 8570.092 - 8620.505: 86.3994% ( 95) 00:08:23.728 8620.505 - 8670.917: 86.9526% ( 97) 00:08:23.728 8670.917 - 8721.329: 87.5342% ( 102) 00:08:23.728 8721.329 - 8771.742: 88.0817% ( 96) 00:08:23.728 8771.742 - 8822.154: 88.6690% ( 103) 00:08:23.728 8822.154 - 8872.566: 89.1937% ( 92) 00:08:23.728 8872.566 - 8922.978: 89.8038% ( 107) 00:08:23.728 8922.978 - 8973.391: 90.3741% ( 100) 00:08:23.728 8973.391 - 9023.803: 90.9557% ( 102) 00:08:23.728 9023.803 - 9074.215: 91.5203% ( 99) 00:08:23.728 9074.215 - 9124.628: 92.0792% ( 98) 00:08:23.728 9124.628 - 9175.040: 92.6494% ( 100) 00:08:23.728 9175.040 - 9225.452: 93.0828% ( 76) 00:08:23.728 9225.452 - 9275.865: 93.4535% ( 65) 00:08:23.728 9275.865 - 9326.277: 93.7215% ( 47) 00:08:23.728 9326.277 - 9376.689: 93.9496% ( 40) 00:08:23.728 9376.689 - 9427.102: 94.1321% ( 32) 00:08:23.728 9427.102 - 9477.514: 94.2803% ( 26) 00:08:23.728 9477.514 - 9527.926: 94.3659% ( 15) 00:08:23.728 9527.926 - 9578.338: 
94.4685% ( 18) 00:08:23.728 9578.338 - 9628.751: 94.5712% ( 18) 00:08:23.728 9628.751 - 9679.163: 94.6453% ( 13) 00:08:23.728 9679.163 - 9729.575: 94.7023% ( 10) 00:08:23.728 9729.575 - 9779.988: 94.7822% ( 14) 00:08:23.728 9779.988 - 9830.400: 94.8620% ( 14) 00:08:23.728 9830.400 - 9880.812: 94.9019% ( 7) 00:08:23.728 9880.812 - 9931.225: 94.9361% ( 6) 00:08:23.728 9931.225 - 9981.637: 94.9818% ( 8) 00:08:23.728 9981.637 - 10032.049: 95.0160% ( 6) 00:08:23.728 10032.049 - 10082.462: 95.0559% ( 7) 00:08:23.728 10082.462 - 10132.874: 95.1015% ( 8) 00:08:23.728 10132.874 - 10183.286: 95.1528% ( 9) 00:08:23.728 10183.286 - 10233.698: 95.1927% ( 7) 00:08:23.728 10233.698 - 10284.111: 95.2384% ( 8) 00:08:23.728 10284.111 - 10334.523: 95.2783% ( 7) 00:08:23.728 10334.523 - 10384.935: 95.3182% ( 7) 00:08:23.728 10384.935 - 10435.348: 95.3638% ( 8) 00:08:23.728 10435.348 - 10485.760: 95.3923% ( 5) 00:08:23.728 10485.760 - 10536.172: 95.4151% ( 4) 00:08:23.728 10536.172 - 10586.585: 95.4380% ( 4) 00:08:23.728 10586.585 - 10636.997: 95.4665% ( 5) 00:08:23.728 10636.997 - 10687.409: 95.4836% ( 3) 00:08:23.728 10687.409 - 10737.822: 95.5064% ( 4) 00:08:23.728 10737.822 - 10788.234: 95.5349% ( 5) 00:08:23.728 10788.234 - 10838.646: 95.5919% ( 10) 00:08:23.728 10838.646 - 10889.058: 95.6432% ( 9) 00:08:23.728 10889.058 - 10939.471: 95.7117% ( 12) 00:08:23.728 10939.471 - 10989.883: 95.7858% ( 13) 00:08:23.728 10989.883 - 11040.295: 95.8428% ( 10) 00:08:23.728 11040.295 - 11090.708: 95.9341% ( 16) 00:08:23.728 11090.708 - 11141.120: 96.0139% ( 14) 00:08:23.728 11141.120 - 11191.532: 96.0995% ( 15) 00:08:23.728 11191.532 - 11241.945: 96.2078% ( 19) 00:08:23.728 11241.945 - 11292.357: 96.3104% ( 18) 00:08:23.728 11292.357 - 11342.769: 96.3789% ( 12) 00:08:23.728 11342.769 - 11393.182: 96.4473% ( 12) 00:08:23.728 11393.182 - 11443.594: 96.5157% ( 12) 00:08:23.728 11443.594 - 11494.006: 96.5785% ( 11) 00:08:23.728 11494.006 - 11544.418: 96.6583% ( 14) 00:08:23.728 11544.418 - 11594.831: 96.7552% ( 17) 00:08:23.728 11594.831 - 11645.243: 96.8636% ( 19) 00:08:23.728 11645.243 - 11695.655: 96.9548% ( 16) 00:08:23.728 11695.655 - 11746.068: 97.0575% ( 18) 00:08:23.728 11746.068 - 11796.480: 97.1601% ( 18) 00:08:23.728 11796.480 - 11846.892: 97.2514% ( 16) 00:08:23.728 11846.892 - 11897.305: 97.3312% ( 14) 00:08:23.728 11897.305 - 11947.717: 97.4167% ( 15) 00:08:23.728 11947.717 - 11998.129: 97.4909% ( 13) 00:08:23.728 11998.129 - 12048.542: 97.5365% ( 8) 00:08:23.728 12048.542 - 12098.954: 97.5878% ( 9) 00:08:23.728 12098.954 - 12149.366: 97.6334% ( 8) 00:08:23.728 12149.366 - 12199.778: 97.6620% ( 5) 00:08:23.728 12199.778 - 12250.191: 97.6905% ( 5) 00:08:23.728 12250.191 - 12300.603: 97.7133% ( 4) 00:08:23.728 12300.603 - 12351.015: 97.7304% ( 3) 00:08:23.728 12351.015 - 12401.428: 97.7532% ( 4) 00:08:23.728 12401.428 - 12451.840: 97.7760% ( 4) 00:08:23.728 12451.840 - 12502.252: 97.7931% ( 3) 00:08:23.728 12502.252 - 12552.665: 97.8159% ( 4) 00:08:23.728 12552.665 - 12603.077: 97.8387% ( 4) 00:08:23.728 12603.077 - 12653.489: 97.8558% ( 3) 00:08:23.728 12653.489 - 12703.902: 97.8729% ( 3) 00:08:23.728 12703.902 - 12754.314: 97.8901% ( 3) 00:08:23.728 12754.314 - 12804.726: 97.9129% ( 4) 00:08:23.728 12804.726 - 12855.138: 97.9300% ( 3) 00:08:23.728 12855.138 - 12905.551: 97.9528% ( 4) 00:08:23.728 12905.551 - 13006.375: 97.9870% ( 6) 00:08:23.728 13006.375 - 13107.200: 98.0326% ( 8) 00:08:23.728 13107.200 - 13208.025: 98.0668% ( 6) 00:08:23.728 13208.025 - 13308.849: 98.1068% ( 7) 00:08:23.728 13308.849 - 
13409.674: 98.1638% ( 10) 00:08:23.728 13409.674 - 13510.498: 98.1923% ( 5) 00:08:23.728 13510.498 - 13611.323: 98.2322% ( 7) 00:08:23.728 13611.323 - 13712.148: 98.2607% ( 5) 00:08:23.728 13712.148 - 13812.972: 98.2835% ( 4) 00:08:23.728 13812.972 - 13913.797: 98.3006% ( 3) 00:08:23.728 13913.797 - 14014.622: 98.3234% ( 4) 00:08:23.728 14014.622 - 14115.446: 98.3463% ( 4) 00:08:23.728 14115.446 - 14216.271: 98.3634% ( 3) 00:08:23.728 14216.271 - 14317.095: 98.3862% ( 4) 00:08:23.728 14317.095 - 14417.920: 98.4090% ( 4) 00:08:23.728 14417.920 - 14518.745: 98.4261% ( 3) 00:08:23.728 14518.745 - 14619.569: 98.4660% ( 7) 00:08:23.728 14619.569 - 14720.394: 98.5059% ( 7) 00:08:23.728 14720.394 - 14821.218: 98.5516% ( 8) 00:08:23.728 14821.218 - 14922.043: 98.5858% ( 6) 00:08:23.728 14922.043 - 15022.868: 98.6314% ( 8) 00:08:23.728 15022.868 - 15123.692: 98.6656% ( 6) 00:08:23.728 15123.692 - 15224.517: 98.6827% ( 3) 00:08:23.728 15224.517 - 15325.342: 98.7055% ( 4) 00:08:23.728 15325.342 - 15426.166: 98.7283% ( 4) 00:08:23.728 15426.166 - 15526.991: 98.7454% ( 3) 00:08:23.728 15526.991 - 15627.815: 98.7682% ( 4) 00:08:23.728 15627.815 - 15728.640: 98.7854% ( 3) 00:08:23.728 15728.640 - 15829.465: 98.8082% ( 4) 00:08:23.728 15829.465 - 15930.289: 98.8310% ( 4) 00:08:23.728 15930.289 - 16031.114: 98.8538% ( 4) 00:08:23.728 16031.114 - 16131.938: 98.9108% ( 10) 00:08:23.728 16131.938 - 16232.763: 98.9678% ( 10) 00:08:23.728 16232.763 - 16333.588: 98.9849% ( 3) 00:08:23.728 16333.588 - 16434.412: 99.0021% ( 3) 00:08:23.728 16434.412 - 16535.237: 99.0192% ( 3) 00:08:23.728 16535.237 - 16636.062: 99.0363% ( 3) 00:08:23.728 16636.062 - 16736.886: 99.0591% ( 4) 00:08:23.728 16736.886 - 16837.711: 99.0762% ( 3) 00:08:23.728 16837.711 - 16938.535: 99.0819% ( 1) 00:08:23.728 16938.535 - 17039.360: 99.1047% ( 4) 00:08:23.728 17039.360 - 17140.185: 99.1275% ( 4) 00:08:23.728 17140.185 - 17241.009: 99.1560% ( 5) 00:08:23.728 17241.009 - 17341.834: 99.1902% ( 6) 00:08:23.728 17341.834 - 17442.658: 99.2245% ( 6) 00:08:23.728 17442.658 - 17543.483: 99.2587% ( 6) 00:08:23.728 17543.483 - 17644.308: 99.2701% ( 2) 00:08:23.728 18753.378 - 18854.203: 99.2929% ( 4) 00:08:23.728 18854.203 - 18955.028: 99.3157% ( 4) 00:08:23.728 18955.028 - 19055.852: 99.3442% ( 5) 00:08:23.728 19055.852 - 19156.677: 99.3670% ( 4) 00:08:23.728 19156.677 - 19257.502: 99.3898% ( 4) 00:08:23.728 19257.502 - 19358.326: 99.4126% ( 4) 00:08:23.728 19358.326 - 19459.151: 99.4411% ( 5) 00:08:23.728 19459.151 - 19559.975: 99.4640% ( 4) 00:08:23.728 19559.975 - 19660.800: 99.4811% ( 3) 00:08:23.728 19660.800 - 19761.625: 99.5096% ( 5) 00:08:23.728 19761.625 - 19862.449: 99.5324% ( 4) 00:08:23.728 19862.449 - 19963.274: 99.5609% ( 5) 00:08:23.728 19963.274 - 20064.098: 99.5837% ( 4) 00:08:23.728 20064.098 - 20164.923: 99.6065% ( 4) 00:08:23.728 20164.923 - 20265.748: 99.6293% ( 4) 00:08:23.728 20265.748 - 20366.572: 99.6350% ( 1) 00:08:23.728 23996.258 - 24097.083: 99.6407% ( 1) 00:08:23.728 24097.083 - 24197.908: 99.6578% ( 3) 00:08:23.728 24197.908 - 24298.732: 99.6864% ( 5) 00:08:23.728 24298.732 - 24399.557: 99.7035% ( 3) 00:08:23.728 24399.557 - 24500.382: 99.7263% ( 4) 00:08:23.728 24500.382 - 24601.206: 99.7548% ( 5) 00:08:23.728 24601.206 - 24702.031: 99.7776% ( 4) 00:08:23.728 24702.031 - 24802.855: 99.8061% ( 5) 00:08:23.728 24802.855 - 24903.680: 99.8289% ( 4) 00:08:23.728 24903.680 - 25004.505: 99.8517% ( 4) 00:08:23.728 25004.505 - 25105.329: 99.8745% ( 4) 00:08:23.728 25105.329 - 25206.154: 99.9031% ( 5) 00:08:23.728 25206.154 - 
25306.978: 99.9259% ( 4) 00:08:23.728 25306.978 - 25407.803: 99.9544% ( 5) 00:08:23.728 25407.803 - 25508.628: 99.9772% ( 4) 00:08:23.728 25508.628 - 25609.452: 100.0000% ( 4) 00:08:23.728 00:08:23.728 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:23.728 ============================================================================== 00:08:23.728 Range in us Cumulative IO count 00:08:23.728 4058.191 - 4083.397: 0.0114% ( 2) 00:08:23.728 4083.397 - 4108.603: 0.0456% ( 6) 00:08:23.728 4108.603 - 4133.809: 0.0627% ( 3) 00:08:23.728 4133.809 - 4159.015: 0.0798% ( 3) 00:08:23.728 4159.015 - 4184.222: 0.0912% ( 2) 00:08:23.728 4184.222 - 4209.428: 0.0969% ( 1) 00:08:23.728 4209.428 - 4234.634: 0.1141% ( 3) 00:08:23.728 4234.634 - 4259.840: 0.1198% ( 1) 00:08:23.728 4259.840 - 4285.046: 0.1312% ( 2) 00:08:23.728 4285.046 - 4310.252: 0.1426% ( 2) 00:08:23.728 4310.252 - 4335.458: 0.1540% ( 2) 00:08:23.728 4335.458 - 4360.665: 0.1654% ( 2) 00:08:23.728 4360.665 - 4385.871: 0.1825% ( 3) 00:08:23.728 4385.871 - 4411.077: 0.1939% ( 2) 00:08:23.728 4411.077 - 4436.283: 0.2110% ( 3) 00:08:23.728 4436.283 - 4461.489: 0.2167% ( 1) 00:08:23.728 4461.489 - 4486.695: 0.2281% ( 2) 00:08:23.728 4486.695 - 4511.902: 0.2395% ( 2) 00:08:23.728 4511.902 - 4537.108: 0.2509% ( 2) 00:08:23.728 4537.108 - 4562.314: 0.2623% ( 2) 00:08:23.728 4562.314 - 4587.520: 0.2737% ( 2) 00:08:23.728 4587.520 - 4612.726: 0.2851% ( 2) 00:08:23.729 4612.726 - 4637.932: 0.2965% ( 2) 00:08:23.729 4637.932 - 4663.138: 0.3079% ( 2) 00:08:23.729 4663.138 - 4688.345: 0.3193% ( 2) 00:08:23.729 4688.345 - 4713.551: 0.3307% ( 2) 00:08:23.729 4713.551 - 4738.757: 0.3479% ( 3) 00:08:23.729 4738.757 - 4763.963: 0.3593% ( 2) 00:08:23.729 4763.963 - 4789.169: 0.3650% ( 1) 00:08:23.729 5671.385 - 5696.591: 0.3764% ( 2) 00:08:23.729 5696.591 - 5721.797: 0.3935% ( 3) 00:08:23.729 5721.797 - 5747.003: 0.3992% ( 1) 00:08:23.729 5747.003 - 5772.209: 0.4106% ( 2) 00:08:23.729 5772.209 - 5797.415: 0.4220% ( 2) 00:08:23.729 5797.415 - 5822.622: 0.4334% ( 2) 00:08:23.729 5822.622 - 5847.828: 0.4505% ( 3) 00:08:23.729 5847.828 - 5873.034: 0.4676% ( 3) 00:08:23.729 5873.034 - 5898.240: 0.4790% ( 2) 00:08:23.729 5898.240 - 5923.446: 0.4904% ( 2) 00:08:23.729 5923.446 - 5948.652: 0.5018% ( 2) 00:08:23.729 5948.652 - 5973.858: 0.5132% ( 2) 00:08:23.729 5973.858 - 5999.065: 0.5246% ( 2) 00:08:23.729 5999.065 - 6024.271: 0.5360% ( 2) 00:08:23.729 6024.271 - 6049.477: 0.5988% ( 11) 00:08:23.729 6049.477 - 6074.683: 0.6786% ( 14) 00:08:23.729 6074.683 - 6099.889: 0.9067% ( 40) 00:08:23.729 6099.889 - 6125.095: 1.4827% ( 101) 00:08:23.729 6125.095 - 6150.302: 2.5262% ( 183) 00:08:23.729 6150.302 - 6175.508: 4.3225% ( 315) 00:08:23.729 6175.508 - 6200.714: 6.1930% ( 328) 00:08:23.729 6200.714 - 6225.920: 8.3200% ( 373) 00:08:23.729 6225.920 - 6251.126: 10.8463% ( 443) 00:08:23.729 6251.126 - 6276.332: 13.2755% ( 426) 00:08:23.729 6276.332 - 6301.538: 15.4995% ( 390) 00:08:23.729 6301.538 - 6326.745: 17.6722% ( 381) 00:08:23.729 6326.745 - 6351.951: 19.8278% ( 378) 00:08:23.729 6351.951 - 6377.157: 21.8351% ( 352) 00:08:23.729 6377.157 - 6402.363: 23.9108% ( 364) 00:08:23.729 6402.363 - 6427.569: 25.8269% ( 336) 00:08:23.729 6427.569 - 6452.775: 27.8000% ( 346) 00:08:23.729 6452.775 - 6503.188: 31.8659% ( 713) 00:08:23.729 6503.188 - 6553.600: 35.9489% ( 716) 00:08:23.729 6553.600 - 6604.012: 40.1403% ( 735) 00:08:23.729 6604.012 - 6654.425: 44.3716% ( 742) 00:08:23.729 6654.425 - 6704.837: 48.5858% ( 739) 00:08:23.729 6704.837 - 6755.249: 52.8228% 
( 743) 00:08:23.729 6755.249 - 6805.662: 57.1567% ( 760) 00:08:23.729 6805.662 - 6856.074: 61.4678% ( 756) 00:08:23.729 6856.074 - 6906.486: 65.7505% ( 751) 00:08:23.729 6906.486 - 6956.898: 69.8620% ( 721) 00:08:23.729 6956.898 - 7007.311: 73.4432% ( 628) 00:08:23.729 7007.311 - 7057.723: 75.6387% ( 385) 00:08:23.729 7057.723 - 7108.135: 76.8647% ( 215) 00:08:23.729 7108.135 - 7158.548: 77.5376% ( 118) 00:08:23.729 7158.548 - 7208.960: 78.0109% ( 83) 00:08:23.729 7208.960 - 7259.372: 78.3246% ( 55) 00:08:23.729 7259.372 - 7309.785: 78.5242% ( 35) 00:08:23.729 7309.785 - 7360.197: 78.6439% ( 21) 00:08:23.729 7360.197 - 7410.609: 78.7637% ( 21) 00:08:23.729 7410.609 - 7461.022: 78.8435% ( 14) 00:08:23.729 7461.022 - 7511.434: 78.9177% ( 13) 00:08:23.729 7511.434 - 7561.846: 78.9804% ( 11) 00:08:23.729 7561.846 - 7612.258: 79.0488% ( 12) 00:08:23.729 7612.258 - 7662.671: 79.1229% ( 13) 00:08:23.729 7662.671 - 7713.083: 79.1686% ( 8) 00:08:23.729 7713.083 - 7763.495: 79.2142% ( 8) 00:08:23.729 7763.495 - 7813.908: 79.2427% ( 5) 00:08:23.729 7813.908 - 7864.320: 79.2940% ( 9) 00:08:23.729 7864.320 - 7914.732: 79.3739% ( 14) 00:08:23.729 7914.732 - 7965.145: 79.5050% ( 23) 00:08:23.729 7965.145 - 8015.557: 79.7331% ( 40) 00:08:23.729 8015.557 - 8065.969: 80.1323% ( 70) 00:08:23.729 8065.969 - 8116.382: 80.7026% ( 100) 00:08:23.729 8116.382 - 8166.794: 81.2101% ( 89) 00:08:23.729 8166.794 - 8217.206: 81.6720% ( 81) 00:08:23.729 8217.206 - 8267.618: 82.2879% ( 108) 00:08:23.729 8267.618 - 8318.031: 83.0178% ( 128) 00:08:23.729 8318.031 - 8368.443: 83.7135% ( 122) 00:08:23.729 8368.443 - 8418.855: 84.3123% ( 105) 00:08:23.729 8418.855 - 8469.268: 84.8711% ( 98) 00:08:23.729 8469.268 - 8519.680: 85.4927% ( 109) 00:08:23.729 8519.680 - 8570.092: 86.0801% ( 103) 00:08:23.729 8570.092 - 8620.505: 86.6617% ( 102) 00:08:23.729 8620.505 - 8670.917: 87.2434% ( 102) 00:08:23.729 8670.917 - 8721.329: 87.7908% ( 96) 00:08:23.729 8721.329 - 8771.742: 88.4010% ( 107) 00:08:23.729 8771.742 - 8822.154: 88.9370% ( 94) 00:08:23.729 8822.154 - 8872.566: 89.5358% ( 105) 00:08:23.729 8872.566 - 8922.978: 90.1118% ( 101) 00:08:23.729 8922.978 - 8973.391: 90.6763% ( 99) 00:08:23.729 8973.391 - 9023.803: 91.2637% ( 103) 00:08:23.729 9023.803 - 9074.215: 91.8054% ( 95) 00:08:23.729 9074.215 - 9124.628: 92.3586% ( 97) 00:08:23.729 9124.628 - 9175.040: 92.9003% ( 95) 00:08:23.729 9175.040 - 9225.452: 93.3394% ( 77) 00:08:23.729 9225.452 - 9275.865: 93.6645% ( 57) 00:08:23.729 9275.865 - 9326.277: 93.8926% ( 40) 00:08:23.729 9326.277 - 9376.689: 94.0865% ( 34) 00:08:23.729 9376.689 - 9427.102: 94.2461% ( 28) 00:08:23.729 9427.102 - 9477.514: 94.3488% ( 18) 00:08:23.729 9477.514 - 9527.926: 94.4286% ( 14) 00:08:23.729 9527.926 - 9578.338: 94.4856% ( 10) 00:08:23.729 9578.338 - 9628.751: 94.5141% ( 5) 00:08:23.729 9628.751 - 9679.163: 94.5484% ( 6) 00:08:23.729 9679.163 - 9729.575: 94.5769% ( 5) 00:08:23.729 9729.575 - 9779.988: 94.5940% ( 3) 00:08:23.729 9779.988 - 9830.400: 94.6168% ( 4) 00:08:23.729 9830.400 - 9880.812: 94.6282% ( 2) 00:08:23.729 9880.812 - 9931.225: 94.6396% ( 2) 00:08:23.729 9931.225 - 9981.637: 94.6738% ( 6) 00:08:23.729 9981.637 - 10032.049: 94.7023% ( 5) 00:08:23.729 10032.049 - 10082.462: 94.7479% ( 8) 00:08:23.729 10082.462 - 10132.874: 94.7822% ( 6) 00:08:23.729 10132.874 - 10183.286: 94.8278% ( 8) 00:08:23.729 10183.286 - 10233.698: 94.8734% ( 8) 00:08:23.729 10233.698 - 10284.111: 94.9247% ( 9) 00:08:23.729 10284.111 - 10334.523: 94.9818% ( 10) 00:08:23.729 10334.523 - 10384.935: 95.0445% ( 11) 
00:08:23.729 10384.935 - 10435.348: 95.1015% ( 10) 00:08:23.729 10435.348 - 10485.760: 95.1756% ( 13) 00:08:23.729 10485.760 - 10536.172: 95.2327% ( 10) 00:08:23.729 10536.172 - 10586.585: 95.2954% ( 11) 00:08:23.729 10586.585 - 10636.997: 95.3581% ( 11) 00:08:23.729 10636.997 - 10687.409: 95.4437% ( 15) 00:08:23.729 10687.409 - 10737.822: 95.5292% ( 15) 00:08:23.729 10737.822 - 10788.234: 95.6090% ( 14) 00:08:23.729 10788.234 - 10838.646: 95.6775% ( 12) 00:08:23.729 10838.646 - 10889.058: 95.7801% ( 18) 00:08:23.729 10889.058 - 10939.471: 95.8714% ( 16) 00:08:23.729 10939.471 - 10989.883: 95.9512% ( 14) 00:08:23.729 10989.883 - 11040.295: 96.0424% ( 16) 00:08:23.729 11040.295 - 11090.708: 96.1223% ( 14) 00:08:23.729 11090.708 - 11141.120: 96.2078% ( 15) 00:08:23.729 11141.120 - 11191.532: 96.2990% ( 16) 00:08:23.729 11191.532 - 11241.945: 96.3903% ( 16) 00:08:23.729 11241.945 - 11292.357: 96.4644% ( 13) 00:08:23.729 11292.357 - 11342.769: 96.5443% ( 14) 00:08:23.729 11342.769 - 11393.182: 96.6526% ( 19) 00:08:23.729 11393.182 - 11443.594: 96.7438% ( 16) 00:08:23.729 11443.594 - 11494.006: 96.8294% ( 15) 00:08:23.729 11494.006 - 11544.418: 96.9092% ( 14) 00:08:23.729 11544.418 - 11594.831: 96.9833% ( 13) 00:08:23.729 11594.831 - 11645.243: 97.0404% ( 10) 00:08:23.729 11645.243 - 11695.655: 97.0917% ( 9) 00:08:23.729 11695.655 - 11746.068: 97.1373% ( 8) 00:08:23.729 11746.068 - 11796.480: 97.1829% ( 8) 00:08:23.729 11796.480 - 11846.892: 97.2343% ( 9) 00:08:23.729 11846.892 - 11897.305: 97.3198% ( 15) 00:08:23.729 11897.305 - 11947.717: 97.3369% ( 3) 00:08:23.729 11947.717 - 11998.129: 97.3540% ( 3) 00:08:23.729 11998.129 - 12048.542: 97.3825% ( 5) 00:08:23.729 12048.542 - 12098.954: 97.4110% ( 5) 00:08:23.729 12098.954 - 12149.366: 97.4453% ( 6) 00:08:23.729 12149.366 - 12199.778: 97.4738% ( 5) 00:08:23.729 12199.778 - 12250.191: 97.5194% ( 8) 00:08:23.729 12250.191 - 12300.603: 97.5536% ( 6) 00:08:23.729 12300.603 - 12351.015: 97.5878% ( 6) 00:08:23.729 12351.015 - 12401.428: 97.6277% ( 7) 00:08:23.729 12401.428 - 12451.840: 97.6734% ( 8) 00:08:23.729 12451.840 - 12502.252: 97.7076% ( 6) 00:08:23.729 12502.252 - 12552.665: 97.7475% ( 7) 00:08:23.729 12552.665 - 12603.077: 97.7931% ( 8) 00:08:23.729 12603.077 - 12653.489: 97.8216% ( 5) 00:08:23.729 12653.489 - 12703.902: 97.8558% ( 6) 00:08:23.729 12703.902 - 12754.314: 97.8844% ( 5) 00:08:23.729 12754.314 - 12804.726: 97.9015% ( 3) 00:08:23.729 12804.726 - 12855.138: 97.9243% ( 4) 00:08:23.729 12855.138 - 12905.551: 97.9414% ( 3) 00:08:23.729 12905.551 - 13006.375: 97.9585% ( 3) 00:08:23.729 13006.375 - 13107.200: 97.9813% ( 4) 00:08:23.729 13107.200 - 13208.025: 97.9984% ( 3) 00:08:23.729 13208.025 - 13308.849: 98.0155% ( 3) 00:08:23.729 13308.849 - 13409.674: 98.0383% ( 4) 00:08:23.729 13409.674 - 13510.498: 98.0554% ( 3) 00:08:23.729 13510.498 - 13611.323: 98.1010% ( 8) 00:08:23.729 13611.323 - 13712.148: 98.1524% ( 9) 00:08:23.729 13712.148 - 13812.972: 98.2037% ( 9) 00:08:23.729 13812.972 - 13913.797: 98.2265% ( 4) 00:08:23.729 13913.797 - 14014.622: 98.2607% ( 6) 00:08:23.729 14014.622 - 14115.446: 98.3063% ( 8) 00:08:23.729 14115.446 - 14216.271: 98.3349% ( 5) 00:08:23.729 14216.271 - 14317.095: 98.3520% ( 3) 00:08:23.729 14317.095 - 14417.920: 98.3748% ( 4) 00:08:23.729 14417.920 - 14518.745: 98.3976% ( 4) 00:08:23.729 14518.745 - 14619.569: 98.4147% ( 3) 00:08:23.729 14619.569 - 14720.394: 98.4375% ( 4) 00:08:23.729 14720.394 - 14821.218: 98.4603% ( 4) 00:08:23.729 14821.218 - 14922.043: 98.4774% ( 3) 00:08:23.729 14922.043 - 
15022.868: 98.5002% ( 4) 00:08:23.729 15022.868 - 15123.692: 98.5401% ( 7) 00:08:23.729 15123.692 - 15224.517: 98.5801% ( 7) 00:08:23.729 15224.517 - 15325.342: 98.5972% ( 3) 00:08:23.729 15325.342 - 15426.166: 98.6086% ( 2) 00:08:23.729 15426.166 - 15526.991: 98.6314% ( 4) 00:08:23.729 15526.991 - 15627.815: 98.6713% ( 7) 00:08:23.729 15627.815 - 15728.640: 98.7112% ( 7) 00:08:23.729 15728.640 - 15829.465: 98.7511% ( 7) 00:08:23.729 15829.465 - 15930.289: 98.7911% ( 7) 00:08:23.729 15930.289 - 16031.114: 98.8310% ( 7) 00:08:23.729 16031.114 - 16131.938: 98.8766% ( 8) 00:08:23.729 16131.938 - 16232.763: 98.9222% ( 8) 00:08:23.729 16232.763 - 16333.588: 98.9564% ( 6) 00:08:23.729 16333.588 - 16434.412: 98.9735% ( 3) 00:08:23.729 16434.412 - 16535.237: 99.0078% ( 6) 00:08:23.729 16535.237 - 16636.062: 99.0306% ( 4) 00:08:23.729 16636.062 - 16736.886: 99.0534% ( 4) 00:08:23.729 16736.886 - 16837.711: 99.0762% ( 4) 00:08:23.729 16837.711 - 16938.535: 99.0876% ( 2) 00:08:23.729 16938.535 - 17039.360: 99.0933% ( 1) 00:08:23.729 17140.185 - 17241.009: 99.1275% ( 6) 00:08:23.729 17241.009 - 17341.834: 99.1503% ( 4) 00:08:23.729 17341.834 - 17442.658: 99.1845% ( 6) 00:08:23.729 17442.658 - 17543.483: 99.2130% ( 5) 00:08:23.729 17543.483 - 17644.308: 99.2473% ( 6) 00:08:23.729 17644.308 - 17745.132: 99.2701% ( 4) 00:08:23.729 17845.957 - 17946.782: 99.2758% ( 1) 00:08:23.729 17946.782 - 18047.606: 99.3100% ( 6) 00:08:23.730 18047.606 - 18148.431: 99.3328% ( 4) 00:08:23.730 18148.431 - 18249.255: 99.3556% ( 4) 00:08:23.730 18249.255 - 18350.080: 99.3784% ( 4) 00:08:23.730 18350.080 - 18450.905: 99.4012% ( 4) 00:08:23.730 18450.905 - 18551.729: 99.4297% ( 5) 00:08:23.730 18551.729 - 18652.554: 99.4526% ( 4) 00:08:23.730 18652.554 - 18753.378: 99.4754% ( 4) 00:08:23.730 18753.378 - 18854.203: 99.4982% ( 4) 00:08:23.730 18854.203 - 18955.028: 99.5267% ( 5) 00:08:23.730 18955.028 - 19055.852: 99.5438% ( 3) 00:08:23.730 19055.852 - 19156.677: 99.5666% ( 4) 00:08:23.730 19156.677 - 19257.502: 99.5894% ( 4) 00:08:23.730 19257.502 - 19358.326: 99.6122% ( 4) 00:08:23.730 19358.326 - 19459.151: 99.6350% ( 4) 00:08:23.730 23290.486 - 23391.311: 99.6578% ( 4) 00:08:23.730 23391.311 - 23492.135: 99.6807% ( 4) 00:08:23.730 23492.135 - 23592.960: 99.7092% ( 5) 00:08:23.730 23592.960 - 23693.785: 99.7320% ( 4) 00:08:23.730 23693.785 - 23794.609: 99.7548% ( 4) 00:08:23.730 23794.609 - 23895.434: 99.7833% ( 5) 00:08:23.730 23895.434 - 23996.258: 99.8061% ( 4) 00:08:23.730 23996.258 - 24097.083: 99.8289% ( 4) 00:08:23.730 24097.083 - 24197.908: 99.8517% ( 4) 00:08:23.730 24197.908 - 24298.732: 99.8802% ( 5) 00:08:23.730 24298.732 - 24399.557: 99.9031% ( 4) 00:08:23.730 24399.557 - 24500.382: 99.9259% ( 4) 00:08:23.730 24500.382 - 24601.206: 99.9487% ( 4) 00:08:23.730 24601.206 - 24702.031: 99.9772% ( 5) 00:08:23.730 24702.031 - 24802.855: 100.0000% ( 4) 00:08:23.730 00:08:23.730 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.730 ============================================================================== 00:08:23.730 Range in us Cumulative IO count 00:08:23.730 3906.954 - 3932.160: 0.0513% ( 9) 00:08:23.730 3932.160 - 3957.366: 0.0912% ( 7) 00:08:23.730 3957.366 - 3982.572: 0.1026% ( 2) 00:08:23.730 3982.572 - 4007.778: 0.1141% ( 2) 00:08:23.730 4007.778 - 4032.985: 0.1198% ( 1) 00:08:23.730 4032.985 - 4058.191: 0.1312% ( 2) 00:08:23.730 4083.397 - 4108.603: 0.1369% ( 1) 00:08:23.730 4108.603 - 4133.809: 0.1540% ( 3) 00:08:23.730 4133.809 - 4159.015: 0.1597% ( 1) 00:08:23.730 4159.015 - 
4184.222: 0.1768% ( 3) 00:08:23.730 4184.222 - 4209.428: 0.1882% ( 2) 00:08:23.730 4209.428 - 4234.634: 0.1996% ( 2) 00:08:23.730 4234.634 - 4259.840: 0.2110% ( 2) 00:08:23.730 4259.840 - 4285.046: 0.2167% ( 1) 00:08:23.730 4285.046 - 4310.252: 0.2281% ( 2) 00:08:23.730 4310.252 - 4335.458: 0.2395% ( 2) 00:08:23.730 4335.458 - 4360.665: 0.2509% ( 2) 00:08:23.730 4360.665 - 4385.871: 0.2623% ( 2) 00:08:23.730 4385.871 - 4411.077: 0.2737% ( 2) 00:08:23.730 4411.077 - 4436.283: 0.2794% ( 1) 00:08:23.730 4436.283 - 4461.489: 0.2908% ( 2) 00:08:23.730 4461.489 - 4486.695: 0.3022% ( 2) 00:08:23.730 4486.695 - 4511.902: 0.3136% ( 2) 00:08:23.730 4511.902 - 4537.108: 0.3250% ( 2) 00:08:23.730 4537.108 - 4562.314: 0.3365% ( 2) 00:08:23.730 4562.314 - 4587.520: 0.3479% ( 2) 00:08:23.730 4587.520 - 4612.726: 0.3650% ( 3) 00:08:23.730 5469.735 - 5494.942: 0.3707% ( 1) 00:08:23.730 5494.942 - 5520.148: 0.3935% ( 4) 00:08:23.730 5520.148 - 5545.354: 0.4163% ( 4) 00:08:23.730 5545.354 - 5570.560: 0.4277% ( 2) 00:08:23.730 5570.560 - 5595.766: 0.4391% ( 2) 00:08:23.730 5595.766 - 5620.972: 0.4505% ( 2) 00:08:23.730 5620.972 - 5646.178: 0.4619% ( 2) 00:08:23.730 5646.178 - 5671.385: 0.4733% ( 2) 00:08:23.730 5671.385 - 5696.591: 0.4790% ( 1) 00:08:23.730 5696.591 - 5721.797: 0.4847% ( 1) 00:08:23.730 5721.797 - 5747.003: 0.5018% ( 3) 00:08:23.730 5747.003 - 5772.209: 0.5189% ( 3) 00:08:23.730 5772.209 - 5797.415: 0.5303% ( 2) 00:08:23.730 5797.415 - 5822.622: 0.5531% ( 4) 00:08:23.730 5822.622 - 5847.828: 0.5646% ( 2) 00:08:23.730 5847.828 - 5873.034: 0.5760% ( 2) 00:08:23.730 5873.034 - 5898.240: 0.5874% ( 2) 00:08:23.730 5898.240 - 5923.446: 0.5931% ( 1) 00:08:23.730 5923.446 - 5948.652: 0.6045% ( 2) 00:08:23.730 5948.652 - 5973.858: 0.6159% ( 2) 00:08:23.730 5973.858 - 5999.065: 0.6273% ( 2) 00:08:23.730 5999.065 - 6024.271: 0.6558% ( 5) 00:08:23.730 6024.271 - 6049.477: 0.6900% ( 6) 00:08:23.730 6049.477 - 6074.683: 0.8155% ( 22) 00:08:23.730 6074.683 - 6099.889: 1.1291% ( 55) 00:08:23.730 6099.889 - 6125.095: 1.7678% ( 112) 00:08:23.730 6125.095 - 6150.302: 3.0395% ( 223) 00:08:23.730 6150.302 - 6175.508: 4.8757% ( 322) 00:08:23.730 6175.508 - 6200.714: 6.9514% ( 364) 00:08:23.730 6200.714 - 6225.920: 9.3351% ( 418) 00:08:23.730 6225.920 - 6251.126: 11.5933% ( 396) 00:08:23.730 6251.126 - 6276.332: 13.5778% ( 348) 00:08:23.730 6276.332 - 6301.538: 15.5851% ( 352) 00:08:23.730 6301.538 - 6326.745: 17.3073% ( 302) 00:08:23.730 6326.745 - 6351.951: 19.2347% ( 338) 00:08:23.730 6351.951 - 6377.157: 21.2363% ( 351) 00:08:23.730 6377.157 - 6402.363: 23.3748% ( 375) 00:08:23.730 6402.363 - 6427.569: 25.5474% ( 381) 00:08:23.730 6427.569 - 6452.775: 27.6688% ( 372) 00:08:23.730 6452.775 - 6503.188: 31.9001% ( 742) 00:08:23.730 6503.188 - 6553.600: 36.0687% ( 731) 00:08:23.730 6553.600 - 6604.012: 40.2657% ( 736) 00:08:23.730 6604.012 - 6654.425: 44.4913% ( 741) 00:08:23.730 6654.425 - 6704.837: 48.8139% ( 758) 00:08:23.730 6704.837 - 6755.249: 53.0452% ( 742) 00:08:23.730 6755.249 - 6805.662: 57.3620% ( 757) 00:08:23.730 6805.662 - 6856.074: 61.6161% ( 746) 00:08:23.730 6856.074 - 6906.486: 65.9272% ( 756) 00:08:23.730 6906.486 - 6956.898: 70.0388% ( 721) 00:08:23.730 6956.898 - 7007.311: 73.6485% ( 633) 00:08:23.730 7007.311 - 7057.723: 75.8326% ( 383) 00:08:23.730 7057.723 - 7108.135: 77.0586% ( 215) 00:08:23.730 7108.135 - 7158.548: 77.7600% ( 123) 00:08:23.730 7158.548 - 7208.960: 78.2448% ( 85) 00:08:23.730 7208.960 - 7259.372: 78.5185% ( 48) 00:08:23.730 7259.372 - 7309.785: 78.6839% ( 29) 00:08:23.730 
7309.785 - 7360.197: 78.7979% ( 20) 00:08:23.730 7360.197 - 7410.609: 78.8663% ( 12) 00:08:23.730 7410.609 - 7461.022: 78.9633% ( 17) 00:08:23.730 7461.022 - 7511.434: 79.0317% ( 12) 00:08:23.730 7511.434 - 7561.846: 79.1001% ( 12) 00:08:23.730 7561.846 - 7612.258: 79.1800% ( 14) 00:08:23.730 7612.258 - 7662.671: 79.2256% ( 8) 00:08:23.730 7662.671 - 7713.083: 79.2883% ( 11) 00:08:23.730 7713.083 - 7763.495: 79.3396% ( 9) 00:08:23.730 7763.495 - 7813.908: 79.3796% ( 7) 00:08:23.730 7813.908 - 7864.320: 79.4195% ( 7) 00:08:23.730 7864.320 - 7914.732: 79.4480% ( 5) 00:08:23.730 7914.732 - 7965.145: 79.5164% ( 12) 00:08:23.730 7965.145 - 8015.557: 79.7901% ( 48) 00:08:23.730 8015.557 - 8065.969: 80.2806% ( 86) 00:08:23.730 8065.969 - 8116.382: 80.6911% ( 72) 00:08:23.730 8116.382 - 8166.794: 81.1474% ( 80) 00:08:23.730 8166.794 - 8217.206: 81.6663% ( 91) 00:08:23.730 8217.206 - 8267.618: 82.1111% ( 78) 00:08:23.730 8267.618 - 8318.031: 82.7441% ( 111) 00:08:23.730 8318.031 - 8368.443: 83.2972% ( 97) 00:08:23.730 8368.443 - 8418.855: 83.8561% ( 98) 00:08:23.730 8418.855 - 8469.268: 84.4035% ( 96) 00:08:23.730 8469.268 - 8519.680: 84.9396% ( 94) 00:08:23.730 8519.680 - 8570.092: 85.5497% ( 107) 00:08:23.730 8570.092 - 8620.505: 86.1200% ( 100) 00:08:23.730 8620.505 - 8670.917: 86.7130% ( 104) 00:08:23.730 8670.917 - 8721.329: 87.3631% ( 114) 00:08:23.730 8721.329 - 8771.742: 87.9619% ( 105) 00:08:23.730 8771.742 - 8822.154: 88.5151% ( 97) 00:08:23.730 8822.154 - 8872.566: 89.1252% ( 107) 00:08:23.730 8872.566 - 8922.978: 89.7240% ( 105) 00:08:23.730 8922.978 - 8973.391: 90.2771% ( 97) 00:08:23.730 8973.391 - 9023.803: 90.8417% ( 99) 00:08:23.730 9023.803 - 9074.215: 91.4062% ( 99) 00:08:23.730 9074.215 - 9124.628: 91.9993% ( 104) 00:08:23.730 9124.628 - 9175.040: 92.5297% ( 93) 00:08:23.730 9175.040 - 9225.452: 93.0144% ( 85) 00:08:23.730 9225.452 - 9275.865: 93.3736% ( 63) 00:08:23.730 9275.865 - 9326.277: 93.6359% ( 46) 00:08:23.730 9326.277 - 9376.689: 93.9097% ( 48) 00:08:23.730 9376.689 - 9427.102: 94.0865% ( 31) 00:08:23.730 9427.102 - 9477.514: 94.2347% ( 26) 00:08:23.730 9477.514 - 9527.926: 94.3431% ( 19) 00:08:23.730 9527.926 - 9578.338: 94.4058% ( 11) 00:08:23.730 9578.338 - 9628.751: 94.4457% ( 7) 00:08:23.730 9628.751 - 9679.163: 94.4913% ( 8) 00:08:23.730 9679.163 - 9729.575: 94.5198% ( 5) 00:08:23.730 9729.575 - 9779.988: 94.5769% ( 10) 00:08:23.730 9779.988 - 9830.400: 94.6282% ( 9) 00:08:23.730 9830.400 - 9880.812: 94.6795% ( 9) 00:08:23.730 9880.812 - 9931.225: 94.7308% ( 9) 00:08:23.730 9931.225 - 9981.637: 94.7708% ( 7) 00:08:23.730 9981.637 - 10032.049: 94.8107% ( 7) 00:08:23.730 10032.049 - 10082.462: 94.8620% ( 9) 00:08:23.730 10082.462 - 10132.874: 94.8962% ( 6) 00:08:23.730 10132.874 - 10183.286: 94.9418% ( 8) 00:08:23.730 10183.286 - 10233.698: 94.9875% ( 8) 00:08:23.730 10233.698 - 10284.111: 95.0502% ( 11) 00:08:23.730 10284.111 - 10334.523: 95.1129% ( 11) 00:08:23.730 10334.523 - 10384.935: 95.1870% ( 13) 00:08:23.730 10384.935 - 10435.348: 95.2555% ( 12) 00:08:23.730 10435.348 - 10485.760: 95.3296% ( 13) 00:08:23.730 10485.760 - 10536.172: 95.4551% ( 22) 00:08:23.730 10536.172 - 10586.585: 95.5577% ( 18) 00:08:23.730 10586.585 - 10636.997: 95.6661% ( 19) 00:08:23.730 10636.997 - 10687.409: 95.7573% ( 16) 00:08:23.730 10687.409 - 10737.822: 95.8428% ( 15) 00:08:23.730 10737.822 - 10788.234: 95.9056% ( 11) 00:08:23.730 10788.234 - 10838.646: 95.9740% ( 12) 00:08:23.730 10838.646 - 10889.058: 96.0424% ( 12) 00:08:23.730 10889.058 - 10939.471: 96.1223% ( 14) 
00:08:23.730 10939.471 - 10989.883: 96.2021% ( 14) 00:08:23.730 10989.883 - 11040.295: 96.2876% ( 15) 00:08:23.730 11040.295 - 11090.708: 96.3732% ( 15) 00:08:23.730 11090.708 - 11141.120: 96.4530% ( 14) 00:08:23.730 11141.120 - 11191.532: 96.5328% ( 14) 00:08:23.730 11191.532 - 11241.945: 96.6241% ( 16) 00:08:23.730 11241.945 - 11292.357: 96.7667% ( 25) 00:08:23.730 11292.357 - 11342.769: 96.8294% ( 11) 00:08:23.730 11342.769 - 11393.182: 96.8579% ( 5) 00:08:23.730 11393.182 - 11443.594: 96.8864% ( 5) 00:08:23.730 11443.594 - 11494.006: 96.9149% ( 5) 00:08:23.730 11494.006 - 11544.418: 96.9377% ( 4) 00:08:23.730 11544.418 - 11594.831: 96.9776% ( 7) 00:08:23.730 11594.831 - 11645.243: 97.0176% ( 7) 00:08:23.730 11645.243 - 11695.655: 97.0575% ( 7) 00:08:23.730 11695.655 - 11746.068: 97.1088% ( 9) 00:08:23.730 11746.068 - 11796.480: 97.1601% ( 9) 00:08:23.730 11796.480 - 11846.892: 97.2000% ( 7) 00:08:23.730 11846.892 - 11897.305: 97.2400% ( 7) 00:08:23.730 11897.305 - 11947.717: 97.2799% ( 7) 00:08:23.730 11947.717 - 11998.129: 97.3084% ( 5) 00:08:23.730 11998.129 - 12048.542: 97.3483% ( 7) 00:08:23.730 12048.542 - 12098.954: 97.3882% ( 7) 00:08:23.730 12098.954 - 12149.366: 97.4167% ( 5) 00:08:23.730 12149.366 - 12199.778: 97.4510% ( 6) 00:08:23.730 12199.778 - 12250.191: 97.4852% ( 6) 00:08:23.730 12250.191 - 12300.603: 97.5137% ( 5) 00:08:23.730 12300.603 - 12351.015: 97.5422% ( 5) 00:08:23.730 12351.015 - 12401.428: 97.5707% ( 5) 00:08:23.730 12401.428 - 12451.840: 97.5992% ( 5) 00:08:23.730 12451.840 - 12502.252: 97.6277% ( 5) 00:08:23.730 12502.252 - 12552.665: 97.6620% ( 6) 00:08:23.731 12552.665 - 12603.077: 97.6791% ( 3) 00:08:23.731 12603.077 - 12653.489: 97.7019% ( 4) 00:08:23.731 12653.489 - 12703.902: 97.7190% ( 3) 00:08:23.731 12703.902 - 12754.314: 97.7418% ( 4) 00:08:23.731 12754.314 - 12804.726: 97.7589% ( 3) 00:08:23.731 12804.726 - 12855.138: 97.7817% ( 4) 00:08:23.731 12855.138 - 12905.551: 97.7988% ( 3) 00:08:23.731 12905.551 - 13006.375: 97.8273% ( 5) 00:08:23.731 13006.375 - 13107.200: 97.8729% ( 8) 00:08:23.731 13107.200 - 13208.025: 97.9129% ( 7) 00:08:23.731 13208.025 - 13308.849: 97.9471% ( 6) 00:08:23.731 13308.849 - 13409.674: 98.0041% ( 10) 00:08:23.731 13409.674 - 13510.498: 98.0383% ( 6) 00:08:23.731 13510.498 - 13611.323: 98.0782% ( 7) 00:08:23.731 13611.323 - 13712.148: 98.1239% ( 8) 00:08:23.731 13712.148 - 13812.972: 98.1581% ( 6) 00:08:23.731 13812.972 - 13913.797: 98.1980% ( 7) 00:08:23.731 13913.797 - 14014.622: 98.2322% ( 6) 00:08:23.731 14014.622 - 14115.446: 98.2778% ( 8) 00:08:23.731 14115.446 - 14216.271: 98.3177% ( 7) 00:08:23.731 14216.271 - 14317.095: 98.3520% ( 6) 00:08:23.731 14317.095 - 14417.920: 98.3862% ( 6) 00:08:23.731 14417.920 - 14518.745: 98.4318% ( 8) 00:08:23.731 14518.745 - 14619.569: 98.4660% ( 6) 00:08:23.731 14619.569 - 14720.394: 98.5059% ( 7) 00:08:23.731 14720.394 - 14821.218: 98.5344% ( 5) 00:08:23.731 14821.218 - 14922.043: 98.5401% ( 1) 00:08:23.731 15325.342 - 15426.166: 98.5573% ( 3) 00:08:23.731 15426.166 - 15526.991: 98.6542% ( 17) 00:08:23.731 15526.991 - 15627.815: 98.6770% ( 4) 00:08:23.731 15627.815 - 15728.640: 98.7340% ( 10) 00:08:23.731 15728.640 - 15829.465: 98.8082% ( 13) 00:08:23.731 15829.465 - 15930.289: 98.8823% ( 13) 00:08:23.731 15930.289 - 16031.114: 98.9279% ( 8) 00:08:23.731 16031.114 - 16131.938: 98.9849% ( 10) 00:08:23.731 16131.938 - 16232.763: 99.0420% ( 10) 00:08:23.731 16232.763 - 16333.588: 99.0648% ( 4) 00:08:23.731 16333.588 - 16434.412: 99.0819% ( 3) 00:08:23.731 16434.412 - 16535.237: 
99.1047% ( 4) 00:08:23.731 16535.237 - 16636.062: 99.1275% ( 4) 00:08:23.731 16636.062 - 16736.886: 99.1446% ( 3) 00:08:23.731 16736.886 - 16837.711: 99.1674% ( 4) 00:08:23.731 16837.711 - 16938.535: 99.2245% ( 10) 00:08:23.731 16938.535 - 17039.360: 99.2587% ( 6) 00:08:23.731 17039.360 - 17140.185: 99.3043% ( 8) 00:08:23.731 17140.185 - 17241.009: 99.3499% ( 8) 00:08:23.731 17241.009 - 17341.834: 99.3898% ( 7) 00:08:23.731 17341.834 - 17442.658: 99.4240% ( 6) 00:08:23.731 17442.658 - 17543.483: 99.4469% ( 4) 00:08:23.731 17543.483 - 17644.308: 99.4697% ( 4) 00:08:23.731 17644.308 - 17745.132: 99.4982% ( 5) 00:08:23.731 17745.132 - 17845.957: 99.5210% ( 4) 00:08:23.731 17845.957 - 17946.782: 99.5438% ( 4) 00:08:23.731 17946.782 - 18047.606: 99.5552% ( 2) 00:08:23.731 18047.606 - 18148.431: 99.5780% ( 4) 00:08:23.731 18148.431 - 18249.255: 99.6008% ( 4) 00:08:23.731 18249.255 - 18350.080: 99.6236% ( 4) 00:08:23.731 18350.080 - 18450.905: 99.6350% ( 2) 00:08:23.731 22181.415 - 22282.240: 99.6521% ( 3) 00:08:23.731 22282.240 - 22383.065: 99.6750% ( 4) 00:08:23.731 22383.065 - 22483.889: 99.6921% ( 3) 00:08:23.731 22483.889 - 22584.714: 99.7206% ( 5) 00:08:23.731 22584.714 - 22685.538: 99.7434% ( 4) 00:08:23.731 22685.538 - 22786.363: 99.7662% ( 4) 00:08:23.731 22786.363 - 22887.188: 99.7890% ( 4) 00:08:23.731 22887.188 - 22988.012: 99.8118% ( 4) 00:08:23.731 22988.012 - 23088.837: 99.8346% ( 4) 00:08:23.731 23088.837 - 23189.662: 99.8574% ( 4) 00:08:23.731 23189.662 - 23290.486: 99.8802% ( 4) 00:08:23.731 23290.486 - 23391.311: 99.9088% ( 5) 00:08:23.731 23391.311 - 23492.135: 99.9316% ( 4) 00:08:23.731 23492.135 - 23592.960: 99.9544% ( 4) 00:08:23.731 23592.960 - 23693.785: 99.9829% ( 5) 00:08:23.731 23693.785 - 23794.609: 100.0000% ( 3) 00:08:23.731 00:08:23.731 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.731 ============================================================================== 00:08:23.731 Range in us Cumulative IO count 00:08:23.731 3629.686 - 3654.892: 0.0057% ( 1) 00:08:23.731 3654.892 - 3680.098: 0.0171% ( 2) 00:08:23.731 3680.098 - 3705.305: 0.0285% ( 2) 00:08:23.731 3705.305 - 3730.511: 0.0399% ( 2) 00:08:23.731 3730.511 - 3755.717: 0.0456% ( 1) 00:08:23.731 3755.717 - 3780.923: 0.0570% ( 2) 00:08:23.731 3780.923 - 3806.129: 0.0741% ( 3) 00:08:23.731 3806.129 - 3831.335: 0.0798% ( 1) 00:08:23.731 3831.335 - 3856.542: 0.0912% ( 2) 00:08:23.731 3856.542 - 3881.748: 0.1026% ( 2) 00:08:23.731 3881.748 - 3906.954: 0.1141% ( 2) 00:08:23.731 3906.954 - 3932.160: 0.1255% ( 2) 00:08:23.731 3932.160 - 3957.366: 0.1369% ( 2) 00:08:23.731 3957.366 - 3982.572: 0.1483% ( 2) 00:08:23.731 3982.572 - 4007.778: 0.1597% ( 2) 00:08:23.731 4007.778 - 4032.985: 0.1711% ( 2) 00:08:23.731 4032.985 - 4058.191: 0.1825% ( 2) 00:08:23.731 4058.191 - 4083.397: 0.1939% ( 2) 00:08:23.731 4083.397 - 4108.603: 0.1996% ( 1) 00:08:23.731 4108.603 - 4133.809: 0.2110% ( 2) 00:08:23.731 4133.809 - 4159.015: 0.2224% ( 2) 00:08:23.731 4159.015 - 4184.222: 0.2338% ( 2) 00:08:23.731 4184.222 - 4209.428: 0.2452% ( 2) 00:08:23.731 4209.428 - 4234.634: 0.2566% ( 2) 00:08:23.731 4234.634 - 4259.840: 0.2680% ( 2) 00:08:23.731 4259.840 - 4285.046: 0.2794% ( 2) 00:08:23.731 4285.046 - 4310.252: 0.2908% ( 2) 00:08:23.731 4310.252 - 4335.458: 0.3022% ( 2) 00:08:23.731 4335.458 - 4360.665: 0.3136% ( 2) 00:08:23.731 4360.665 - 4385.871: 0.3250% ( 2) 00:08:23.731 4385.871 - 4411.077: 0.3365% ( 2) 00:08:23.731 4411.077 - 4436.283: 0.3479% ( 2) 00:08:23.731 4436.283 - 4461.489: 0.3593% ( 2) 00:08:23.731 
4461.489 - 4486.695: 0.3650% ( 1) 00:08:23.731 5293.292 - 5318.498: 0.3821% ( 3) 00:08:23.731 5318.498 - 5343.705: 0.4277% ( 8) 00:08:23.731 5343.705 - 5368.911: 0.4334% ( 1) 00:08:23.731 5368.911 - 5394.117: 0.4448% ( 2) 00:08:23.731 5394.117 - 5419.323: 0.4562% ( 2) 00:08:23.731 5419.323 - 5444.529: 0.4619% ( 1) 00:08:23.731 5444.529 - 5469.735: 0.4676% ( 1) 00:08:23.731 5469.735 - 5494.942: 0.4790% ( 2) 00:08:23.731 5494.942 - 5520.148: 0.4904% ( 2) 00:08:23.731 5520.148 - 5545.354: 0.5018% ( 2) 00:08:23.731 5545.354 - 5570.560: 0.5189% ( 3) 00:08:23.731 5570.560 - 5595.766: 0.5303% ( 2) 00:08:23.731 5595.766 - 5620.972: 0.5474% ( 3) 00:08:23.731 5620.972 - 5646.178: 0.5589% ( 2) 00:08:23.731 5646.178 - 5671.385: 0.5760% ( 3) 00:08:23.731 5671.385 - 5696.591: 0.5874% ( 2) 00:08:23.731 5696.591 - 5721.797: 0.5988% ( 2) 00:08:23.731 5721.797 - 5747.003: 0.6102% ( 2) 00:08:23.731 5747.003 - 5772.209: 0.6216% ( 2) 00:08:23.731 5772.209 - 5797.415: 0.6330% ( 2) 00:08:23.731 5797.415 - 5822.622: 0.6444% ( 2) 00:08:23.731 5822.622 - 5847.828: 0.6558% ( 2) 00:08:23.731 5847.828 - 5873.034: 0.6672% ( 2) 00:08:23.731 5873.034 - 5898.240: 0.6786% ( 2) 00:08:23.731 5898.240 - 5923.446: 0.6900% ( 2) 00:08:23.731 5923.446 - 5948.652: 0.7014% ( 2) 00:08:23.731 5948.652 - 5973.858: 0.7128% ( 2) 00:08:23.731 5973.858 - 5999.065: 0.7242% ( 2) 00:08:23.731 5999.065 - 6024.271: 0.7299% ( 1) 00:08:23.731 6024.271 - 6049.477: 0.7356% ( 1) 00:08:23.731 6049.477 - 6074.683: 0.8497% ( 20) 00:08:23.731 6074.683 - 6099.889: 1.0835% ( 41) 00:08:23.731 6099.889 - 6125.095: 1.8761% ( 139) 00:08:23.731 6125.095 - 6150.302: 3.0338% ( 203) 00:08:23.731 6150.302 - 6175.508: 4.7673% ( 304) 00:08:23.731 6175.508 - 6200.714: 6.9457% ( 382) 00:08:23.731 6200.714 - 6225.920: 9.3522% ( 422) 00:08:23.731 6225.920 - 6251.126: 11.4279% ( 364) 00:08:23.731 6251.126 - 6276.332: 13.4352% ( 352) 00:08:23.731 6276.332 - 6301.538: 15.5680% ( 374) 00:08:23.731 6301.538 - 6326.745: 17.7064% ( 375) 00:08:23.731 6326.745 - 6351.951: 19.6510% ( 341) 00:08:23.731 6351.951 - 6377.157: 21.5500% ( 333) 00:08:23.731 6377.157 - 6402.363: 23.5173% ( 345) 00:08:23.731 6402.363 - 6427.569: 25.5417% ( 355) 00:08:23.731 6427.569 - 6452.775: 27.6574% ( 371) 00:08:23.731 6452.775 - 6503.188: 31.8887% ( 742) 00:08:23.731 6503.188 - 6553.600: 36.0972% ( 738) 00:08:23.731 6553.600 - 6604.012: 40.3399% ( 744) 00:08:23.731 6604.012 - 6654.425: 44.4799% ( 726) 00:08:23.731 6654.425 - 6704.837: 48.6770% ( 736) 00:08:23.731 6704.837 - 6755.249: 52.8513% ( 732) 00:08:23.731 6755.249 - 6805.662: 57.0598% ( 738) 00:08:23.731 6805.662 - 6856.074: 61.3538% ( 753) 00:08:23.731 6856.074 - 6906.486: 65.6706% ( 757) 00:08:23.731 6906.486 - 6956.898: 69.9247% ( 746) 00:08:23.731 6956.898 - 7007.311: 73.4888% ( 625) 00:08:23.731 7007.311 - 7057.723: 75.8098% ( 407) 00:08:23.731 7057.723 - 7108.135: 77.0301% ( 214) 00:08:23.731 7108.135 - 7158.548: 77.7885% ( 133) 00:08:23.731 7158.548 - 7208.960: 78.2790% ( 86) 00:08:23.731 7208.960 - 7259.372: 78.5983% ( 56) 00:08:23.731 7259.372 - 7309.785: 78.7979% ( 35) 00:08:23.731 7309.785 - 7360.197: 78.9062% ( 19) 00:08:23.731 7360.197 - 7410.609: 78.9804% ( 13) 00:08:23.731 7410.609 - 7461.022: 79.0773% ( 17) 00:08:23.731 7461.022 - 7511.434: 79.1629% ( 15) 00:08:23.731 7511.434 - 7561.846: 79.2484% ( 15) 00:08:23.731 7561.846 - 7612.258: 79.3339% ( 15) 00:08:23.731 7612.258 - 7662.671: 79.4081% ( 13) 00:08:23.731 7662.671 - 7713.083: 79.4879% ( 14) 00:08:23.731 7713.083 - 7763.495: 79.5449% ( 10) 00:08:23.731 7763.495 - 
7813.908: 79.5792% ( 6) 00:08:23.731 7813.908 - 7864.320: 79.6134% ( 6) 00:08:23.731 7864.320 - 7914.732: 79.6419% ( 5) 00:08:23.731 7914.732 - 7965.145: 79.7559% ( 20) 00:08:23.731 7965.145 - 8015.557: 80.0240% ( 47) 00:08:23.731 8015.557 - 8065.969: 80.3433% ( 56) 00:08:23.731 8065.969 - 8116.382: 80.7425% ( 70) 00:08:23.731 8116.382 - 8166.794: 81.1759% ( 76) 00:08:23.731 8166.794 - 8217.206: 81.5636% ( 68) 00:08:23.731 8217.206 - 8267.618: 82.1681% ( 106) 00:08:23.731 8267.618 - 8318.031: 82.7213% ( 97) 00:08:23.731 8318.031 - 8368.443: 83.2402% ( 91) 00:08:23.731 8368.443 - 8418.855: 83.8960% ( 115) 00:08:23.731 8418.855 - 8469.268: 84.5290% ( 111) 00:08:23.731 8469.268 - 8519.680: 85.0650% ( 94) 00:08:23.731 8519.680 - 8570.092: 85.5839% ( 91) 00:08:23.731 8570.092 - 8620.505: 86.1029% ( 91) 00:08:23.731 8620.505 - 8670.917: 86.6446% ( 95) 00:08:23.731 8670.917 - 8721.329: 87.1521% ( 89) 00:08:23.731 8721.329 - 8771.742: 87.7110% ( 98) 00:08:23.731 8771.742 - 8822.154: 88.2185% ( 89) 00:08:23.731 8822.154 - 8872.566: 88.8059% ( 103) 00:08:23.731 8872.566 - 8922.978: 89.3305% ( 92) 00:08:23.731 8922.978 - 8973.391: 89.9350% ( 106) 00:08:23.731 8973.391 - 9023.803: 90.4596% ( 92) 00:08:23.731 9023.803 - 9074.215: 91.0584% ( 105) 00:08:23.731 9074.215 - 9124.628: 91.6115% ( 97) 00:08:23.731 9124.628 - 9175.040: 92.2160% ( 106) 00:08:23.731 9175.040 - 9225.452: 92.6779% ( 81) 00:08:23.731 9225.452 - 9275.865: 93.0429% ( 64) 00:08:23.731 9275.865 - 9326.277: 93.3394% ( 52) 00:08:23.732 9326.277 - 9376.689: 93.5732% ( 41) 00:08:23.732 9376.689 - 9427.102: 93.8355% ( 46) 00:08:23.732 9427.102 - 9477.514: 93.9724% ( 24) 00:08:23.732 9477.514 - 9527.926: 94.0807% ( 19) 00:08:23.732 9527.926 - 9578.338: 94.1948% ( 20) 00:08:23.732 9578.338 - 9628.751: 94.2974% ( 18) 00:08:23.732 9628.751 - 9679.163: 94.3944% ( 17) 00:08:23.732 9679.163 - 9729.575: 94.4742% ( 14) 00:08:23.732 9729.575 - 9779.988: 94.5598% ( 15) 00:08:23.732 9779.988 - 9830.400: 94.6453% ( 15) 00:08:23.732 9830.400 - 9880.812: 94.7479% ( 18) 00:08:23.732 9880.812 - 9931.225: 94.8962% ( 26) 00:08:23.732 9931.225 - 9981.637: 94.9932% ( 17) 00:08:23.732 9981.637 - 10032.049: 95.0844% ( 16) 00:08:23.732 10032.049 - 10082.462: 95.1699% ( 15) 00:08:23.732 10082.462 - 10132.874: 95.2498% ( 14) 00:08:23.732 10132.874 - 10183.286: 95.3296% ( 14) 00:08:23.732 10183.286 - 10233.698: 95.4208% ( 16) 00:08:23.732 10233.698 - 10284.111: 95.5007% ( 14) 00:08:23.732 10284.111 - 10334.523: 95.5634% ( 11) 00:08:23.732 10334.523 - 10384.935: 95.6318% ( 12) 00:08:23.732 10384.935 - 10435.348: 95.7174% ( 15) 00:08:23.732 10435.348 - 10485.760: 95.7858% ( 12) 00:08:23.732 10485.760 - 10536.172: 95.8542% ( 12) 00:08:23.732 10536.172 - 10586.585: 95.9056% ( 9) 00:08:23.732 10586.585 - 10636.997: 95.9512% ( 8) 00:08:23.732 10636.997 - 10687.409: 96.0139% ( 11) 00:08:23.732 10687.409 - 10737.822: 96.0709% ( 10) 00:08:23.732 10737.822 - 10788.234: 96.1280% ( 10) 00:08:23.732 10788.234 - 10838.646: 96.2021% ( 13) 00:08:23.732 10838.646 - 10889.058: 96.2363% ( 6) 00:08:23.732 10889.058 - 10939.471: 96.2705% ( 6) 00:08:23.732 10939.471 - 10989.883: 96.3161% ( 8) 00:08:23.732 10989.883 - 11040.295: 96.3618% ( 8) 00:08:23.732 11040.295 - 11090.708: 96.3903% ( 5) 00:08:23.732 11090.708 - 11141.120: 96.4359% ( 8) 00:08:23.732 11141.120 - 11191.532: 96.4758% ( 7) 00:08:23.732 11191.532 - 11241.945: 96.5100% ( 6) 00:08:23.732 11241.945 - 11292.357: 96.5899% ( 14) 00:08:23.732 11292.357 - 11342.769: 96.6583% ( 12) 00:08:23.732 11342.769 - 11393.182: 96.6982% ( 7) 
00:08:23.732 11393.182 - 11443.594: 96.7267% ( 5) 00:08:23.732 11443.594 - 11494.006: 96.7552% ( 5) 00:08:23.732 11494.006 - 11544.418: 96.7952% ( 7) 00:08:23.732 11544.418 - 11594.831: 96.8408% ( 8) 00:08:23.732 11594.831 - 11645.243: 96.8750% ( 6) 00:08:23.732 11645.243 - 11695.655: 96.9149% ( 7) 00:08:23.732 11695.655 - 11746.068: 96.9548% ( 7) 00:08:23.732 11746.068 - 11796.480: 96.9833% ( 5) 00:08:23.732 11796.480 - 11846.892: 97.0233% ( 7) 00:08:23.732 11846.892 - 11897.305: 97.0689% ( 8) 00:08:23.732 11897.305 - 11947.717: 97.0974% ( 5) 00:08:23.732 11947.717 - 11998.129: 97.1202% ( 4) 00:08:23.732 11998.129 - 12048.542: 97.1373% ( 3) 00:08:23.732 12048.542 - 12098.954: 97.2172% ( 14) 00:08:23.732 12098.954 - 12149.366: 97.2343% ( 3) 00:08:23.732 12149.366 - 12199.778: 97.2571% ( 4) 00:08:23.732 12199.778 - 12250.191: 97.3084% ( 9) 00:08:23.732 12250.191 - 12300.603: 97.3312% ( 4) 00:08:23.732 12300.603 - 12351.015: 97.3597% ( 5) 00:08:23.732 12351.015 - 12401.428: 97.3882% ( 5) 00:08:23.732 12401.428 - 12451.840: 97.4281% ( 7) 00:08:23.732 12451.840 - 12502.252: 97.4681% ( 7) 00:08:23.732 12502.252 - 12552.665: 97.5023% ( 6) 00:08:23.732 12552.665 - 12603.077: 97.5422% ( 7) 00:08:23.732 12603.077 - 12653.489: 97.5878% ( 8) 00:08:23.732 12653.489 - 12703.902: 97.6277% ( 7) 00:08:23.732 12703.902 - 12754.314: 97.6734% ( 8) 00:08:23.732 12754.314 - 12804.726: 97.7133% ( 7) 00:08:23.732 12804.726 - 12855.138: 97.7532% ( 7) 00:08:23.732 12855.138 - 12905.551: 97.7874% ( 6) 00:08:23.732 12905.551 - 13006.375: 97.8387% ( 9) 00:08:23.732 13006.375 - 13107.200: 97.8786% ( 7) 00:08:23.732 13107.200 - 13208.025: 97.9186% ( 7) 00:08:23.732 13208.025 - 13308.849: 97.9642% ( 8) 00:08:23.732 13308.849 - 13409.674: 97.9984% ( 6) 00:08:23.732 13409.674 - 13510.498: 98.0326% ( 6) 00:08:23.732 13510.498 - 13611.323: 98.0554% ( 4) 00:08:23.732 13611.323 - 13712.148: 98.0953% ( 7) 00:08:23.732 13712.148 - 13812.972: 98.1353% ( 7) 00:08:23.732 13812.972 - 13913.797: 98.1695% ( 6) 00:08:23.732 13913.797 - 14014.622: 98.2094% ( 7) 00:08:23.732 14014.622 - 14115.446: 98.2493% ( 7) 00:08:23.732 14115.446 - 14216.271: 98.2835% ( 6) 00:08:23.732 14216.271 - 14317.095: 98.3063% ( 4) 00:08:23.732 14317.095 - 14417.920: 98.3234% ( 3) 00:08:23.732 14417.920 - 14518.745: 98.3463% ( 4) 00:08:23.732 14518.745 - 14619.569: 98.4090% ( 11) 00:08:23.732 14619.569 - 14720.394: 98.4660% ( 10) 00:08:23.732 14720.394 - 14821.218: 98.4945% ( 5) 00:08:23.732 14821.218 - 14922.043: 98.5287% ( 6) 00:08:23.732 14922.043 - 15022.868: 98.5687% ( 7) 00:08:23.732 15022.868 - 15123.692: 98.6143% ( 8) 00:08:23.732 15123.692 - 15224.517: 98.6770% ( 11) 00:08:23.732 15224.517 - 15325.342: 98.7283% ( 9) 00:08:23.732 15325.342 - 15426.166: 98.7911% ( 11) 00:08:23.732 15426.166 - 15526.991: 98.8595% ( 12) 00:08:23.732 15526.991 - 15627.815: 98.9108% ( 9) 00:08:23.732 15627.815 - 15728.640: 98.9564% ( 8) 00:08:23.732 15728.640 - 15829.465: 98.9906% ( 6) 00:08:23.732 15829.465 - 15930.289: 99.0705% ( 14) 00:08:23.732 15930.289 - 16031.114: 99.1332% ( 11) 00:08:23.732 16031.114 - 16131.938: 99.1845% ( 9) 00:08:23.732 16131.938 - 16232.763: 99.2416% ( 10) 00:08:23.732 16232.763 - 16333.588: 99.2872% ( 8) 00:08:23.732 16333.588 - 16434.412: 99.3442% ( 10) 00:08:23.732 16434.412 - 16535.237: 99.3841% ( 7) 00:08:23.732 16535.237 - 16636.062: 99.4354% ( 9) 00:08:23.732 16636.062 - 16736.886: 99.4811% ( 8) 00:08:23.732 16736.886 - 16837.711: 99.5210% ( 7) 00:08:23.732 16837.711 - 16938.535: 99.5438% ( 4) 00:08:23.732 16938.535 - 17039.360: 99.5666% 
( 4) 00:08:23.732 17039.360 - 17140.185: 99.5894% ( 4) 00:08:23.732 17140.185 - 17241.009: 99.6179% ( 5) 00:08:23.732 17241.009 - 17341.834: 99.6350% ( 3) 00:08:23.732 21072.345 - 21173.169: 99.6407% ( 1) 00:08:23.732 21173.169 - 21273.994: 99.6635% ( 4) 00:08:23.732 21273.994 - 21374.818: 99.6921% ( 5) 00:08:23.732 21374.818 - 21475.643: 99.7149% ( 4) 00:08:23.732 21475.643 - 21576.468: 99.7377% ( 4) 00:08:23.732 21576.468 - 21677.292: 99.7605% ( 4) 00:08:23.732 21677.292 - 21778.117: 99.7833% ( 4) 00:08:23.732 21778.117 - 21878.942: 99.8061% ( 4) 00:08:23.732 21878.942 - 21979.766: 99.8346% ( 5) 00:08:23.732 21979.766 - 22080.591: 99.8574% ( 4) 00:08:23.732 22080.591 - 22181.415: 99.8802% ( 4) 00:08:23.732 22181.415 - 22282.240: 99.9088% ( 5) 00:08:23.732 22282.240 - 22383.065: 99.9316% ( 4) 00:08:23.732 22383.065 - 22483.889: 99.9544% ( 4) 00:08:23.732 22483.889 - 22584.714: 99.9772% ( 4) 00:08:23.732 22584.714 - 22685.538: 100.0000% ( 4) 00:08:23.732 00:08:23.732 14:16:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:24.668 Initializing NVMe Controllers 00:08:24.668 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.668 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.668 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.668 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.668 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:24.668 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:24.668 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:24.668 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:24.668 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:24.668 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:24.668 Initialization complete. Launching workers. 
00:08:24.668 ======================================================== 00:08:24.668 Latency(us) 00:08:24.668 Device Information : IOPS MiB/s Average min max 00:08:24.668 PCIE (0000:00:13.0) NSID 1 from core 0: 17974.58 210.64 7124.91 5769.51 23896.18 00:08:24.668 PCIE (0000:00:10.0) NSID 1 from core 0: 17974.58 210.64 7117.26 5487.30 23304.78 00:08:24.668 PCIE (0000:00:11.0) NSID 1 from core 0: 17974.58 210.64 7110.16 5035.72 21948.99 00:08:24.668 PCIE (0000:00:12.0) NSID 1 from core 0: 17974.58 210.64 7103.34 4422.54 21596.30 00:08:24.668 PCIE (0000:00:12.0) NSID 2 from core 0: 17974.58 210.64 7096.49 4023.19 20925.91 00:08:24.668 PCIE (0000:00:12.0) NSID 3 from core 0: 17974.58 210.64 7089.64 4044.43 20798.46 00:08:24.668 ======================================================== 00:08:24.668 Total : 107847.49 1263.84 7106.97 4023.19 23896.18 00:08:24.668 00:08:24.668 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:24.668 ================================================================================= 00:08:24.668 1.00000% : 6099.889us 00:08:24.668 10.00000% : 6301.538us 00:08:24.668 25.00000% : 6553.600us 00:08:24.668 50.00000% : 6805.662us 00:08:24.668 75.00000% : 7057.723us 00:08:24.668 90.00000% : 7914.732us 00:08:24.668 95.00000% : 9376.689us 00:08:24.668 98.00000% : 12703.902us 00:08:24.668 99.00000% : 14518.745us 00:08:24.668 99.50000% : 18047.606us 00:08:24.668 99.90000% : 23592.960us 00:08:24.668 99.99000% : 23895.434us 00:08:24.668 99.99900% : 23996.258us 00:08:24.668 99.99990% : 23996.258us 00:08:24.668 99.99999% : 23996.258us 00:08:24.668 00:08:24.668 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.668 ================================================================================= 00:08:24.668 1.00000% : 5923.446us 00:08:24.668 10.00000% : 6276.332us 00:08:24.668 25.00000% : 6503.188us 00:08:24.668 50.00000% : 6805.662us 00:08:24.668 75.00000% : 7108.135us 00:08:24.668 90.00000% : 7813.908us 00:08:24.668 95.00000% : 9729.575us 00:08:24.668 98.00000% : 12250.191us 00:08:24.668 99.00000% : 14518.745us 00:08:24.668 99.50000% : 18350.080us 00:08:24.668 99.90000% : 22988.012us 00:08:24.668 99.99000% : 23290.486us 00:08:24.668 99.99900% : 23391.311us 00:08:24.668 99.99990% : 23391.311us 00:08:24.668 99.99999% : 23391.311us 00:08:24.668 00:08:24.668 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.668 ================================================================================= 00:08:24.668 1.00000% : 6125.095us 00:08:24.668 10.00000% : 6301.538us 00:08:24.668 25.00000% : 6503.188us 00:08:24.668 50.00000% : 6805.662us 00:08:24.668 75.00000% : 7007.311us 00:08:24.668 90.00000% : 7813.908us 00:08:24.668 95.00000% : 9880.812us 00:08:24.668 98.00000% : 12250.191us 00:08:24.668 99.00000% : 14518.745us 00:08:24.668 99.50000% : 18148.431us 00:08:24.668 99.90000% : 21374.818us 00:08:24.668 99.99000% : 21979.766us 00:08:24.668 99.99900% : 21979.766us 00:08:24.668 99.99990% : 21979.766us 00:08:24.668 99.99999% : 21979.766us 00:08:24.668 00:08:24.668 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.668 ================================================================================= 00:08:24.668 1.00000% : 6074.683us 00:08:24.668 10.00000% : 6276.332us 00:08:24.668 25.00000% : 6503.188us 00:08:24.668 50.00000% : 6805.662us 00:08:24.668 75.00000% : 7057.723us 00:08:24.668 90.00000% : 7763.495us 00:08:24.668 95.00000% : 9830.400us 00:08:24.668 98.00000% : 12703.902us 00:08:24.668 99.00000% : 
14014.622us 00:08:24.668 99.50000% : 17543.483us 00:08:24.668 99.90000% : 20870.695us 00:08:24.668 99.99000% : 21677.292us 00:08:24.668 99.99900% : 21677.292us 00:08:24.668 99.99990% : 21677.292us 00:08:24.668 99.99999% : 21677.292us 00:08:24.668 00:08:24.668 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.668 ================================================================================= 00:08:24.668 1.00000% : 6074.683us 00:08:24.668 10.00000% : 6301.538us 00:08:24.668 25.00000% : 6553.600us 00:08:24.668 50.00000% : 6805.662us 00:08:24.668 75.00000% : 7057.723us 00:08:24.668 90.00000% : 7914.732us 00:08:24.668 95.00000% : 9376.689us 00:08:24.668 98.00000% : 13006.375us 00:08:24.668 99.00000% : 14014.622us 00:08:24.668 99.50000% : 16837.711us 00:08:24.668 99.90000% : 20467.397us 00:08:24.668 99.99000% : 20971.520us 00:08:24.668 99.99900% : 20971.520us 00:08:24.669 99.99990% : 20971.520us 00:08:24.669 99.99999% : 20971.520us 00:08:24.669 00:08:24.669 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.669 ================================================================================= 00:08:24.669 1.00000% : 6074.683us 00:08:24.669 10.00000% : 6301.538us 00:08:24.669 25.00000% : 6553.600us 00:08:24.669 50.00000% : 6805.662us 00:08:24.669 75.00000% : 7057.723us 00:08:24.669 90.00000% : 7864.320us 00:08:24.669 95.00000% : 9275.865us 00:08:24.669 98.00000% : 12703.902us 00:08:24.669 99.00000% : 14115.446us 00:08:24.669 99.50000% : 16232.763us 00:08:24.669 99.90000% : 19660.800us 00:08:24.669 99.99000% : 20870.695us 00:08:24.669 99.99900% : 20870.695us 00:08:24.669 99.99990% : 20870.695us 00:08:24.669 99.99999% : 20870.695us 00:08:24.669 00:08:24.669 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:24.669 ============================================================================== 00:08:24.669 Range in us Cumulative IO count 00:08:24.669 5747.003 - 5772.209: 0.0056% ( 1) 00:08:24.669 5772.209 - 5797.415: 0.0167% ( 2) 00:08:24.669 5797.415 - 5822.622: 0.0278% ( 2) 00:08:24.669 5822.622 - 5847.828: 0.0334% ( 1) 00:08:24.669 5847.828 - 5873.034: 0.0500% ( 3) 00:08:24.669 5873.034 - 5898.240: 0.0778% ( 5) 00:08:24.669 5898.240 - 5923.446: 0.0890% ( 2) 00:08:24.669 5923.446 - 5948.652: 0.1223% ( 6) 00:08:24.669 5948.652 - 5973.858: 0.1557% ( 6) 00:08:24.669 5973.858 - 5999.065: 0.2169% ( 11) 00:08:24.669 5999.065 - 6024.271: 0.2780% ( 11) 00:08:24.669 6024.271 - 6049.477: 0.4504% ( 31) 00:08:24.669 6049.477 - 6074.683: 0.7673% ( 57) 00:08:24.669 6074.683 - 6099.889: 1.0843% ( 57) 00:08:24.669 6099.889 - 6125.095: 1.7015% ( 111) 00:08:24.669 6125.095 - 6150.302: 2.1575% ( 82) 00:08:24.669 6150.302 - 6175.508: 3.1472% ( 178) 00:08:24.669 6175.508 - 6200.714: 5.0489% ( 342) 00:08:24.669 6200.714 - 6225.920: 6.7004% ( 297) 00:08:24.669 6225.920 - 6251.126: 7.9738% ( 229) 00:08:24.669 6251.126 - 6276.332: 9.3750% ( 252) 00:08:24.669 6276.332 - 6301.538: 10.8763% ( 270) 00:08:24.669 6301.538 - 6326.745: 12.6335% ( 316) 00:08:24.669 6326.745 - 6351.951: 14.4239% ( 322) 00:08:24.669 6351.951 - 6377.157: 15.7473% ( 238) 00:08:24.669 6377.157 - 6402.363: 17.3932% ( 296) 00:08:24.669 6402.363 - 6427.569: 19.3783% ( 357) 00:08:24.669 6427.569 - 6452.775: 21.4468% ( 372) 00:08:24.669 6452.775 - 6503.188: 24.6775% ( 581) 00:08:24.669 6503.188 - 6553.600: 28.5532% ( 697) 00:08:24.669 6553.600 - 6604.012: 32.1730% ( 651) 00:08:24.669 6604.012 - 6654.425: 37.4444% ( 948) 00:08:24.669 6654.425 - 6704.837: 42.8325% ( 969) 00:08:24.669 6704.837 - 
6755.249: 47.5867% ( 855) 00:08:24.669 6755.249 - 6805.662: 53.2640% ( 1021) 00:08:24.669 6805.662 - 6856.074: 59.6308% ( 1145) 00:08:24.669 6856.074 - 6906.486: 64.7409% ( 919) 00:08:24.669 6906.486 - 6956.898: 69.5396% ( 863) 00:08:24.669 6956.898 - 7007.311: 73.8545% ( 776) 00:08:24.669 7007.311 - 7057.723: 77.8025% ( 710) 00:08:24.669 7057.723 - 7108.135: 80.0823% ( 410) 00:08:24.669 7108.135 - 7158.548: 82.1174% ( 366) 00:08:24.669 7158.548 - 7208.960: 83.7745% ( 298) 00:08:24.669 7208.960 - 7259.372: 84.7420% ( 174) 00:08:24.669 7259.372 - 7309.785: 85.6650% ( 166) 00:08:24.669 7309.785 - 7360.197: 86.4824% ( 147) 00:08:24.669 7360.197 - 7410.609: 86.8939% ( 74) 00:08:24.669 7410.609 - 7461.022: 87.4666% ( 103) 00:08:24.669 7461.022 - 7511.434: 88.1395% ( 121) 00:08:24.669 7511.434 - 7561.846: 88.5454% ( 73) 00:08:24.669 7561.846 - 7612.258: 88.9513% ( 73) 00:08:24.669 7612.258 - 7662.671: 89.1348% ( 33) 00:08:24.669 7662.671 - 7713.083: 89.3016% ( 30) 00:08:24.669 7713.083 - 7763.495: 89.5574% ( 46) 00:08:24.669 7763.495 - 7813.908: 89.7687% ( 38) 00:08:24.669 7813.908 - 7864.320: 89.9800% ( 38) 00:08:24.669 7864.320 - 7914.732: 90.4193% ( 79) 00:08:24.669 7914.732 - 7965.145: 90.6361% ( 39) 00:08:24.669 7965.145 - 8015.557: 90.9030% ( 48) 00:08:24.669 8015.557 - 8065.969: 91.0476% ( 26) 00:08:24.669 8065.969 - 8116.382: 91.1977% ( 27) 00:08:24.669 8116.382 - 8166.794: 91.4480% ( 45) 00:08:24.669 8166.794 - 8217.206: 91.5536% ( 19) 00:08:24.669 8217.206 - 8267.618: 91.7149% ( 29) 00:08:24.669 8267.618 - 8318.031: 91.9317% ( 39) 00:08:24.669 8318.031 - 8368.443: 92.1263% ( 35) 00:08:24.669 8368.443 - 8418.855: 92.2320% ( 19) 00:08:24.669 8418.855 - 8469.268: 92.3543% ( 22) 00:08:24.669 8469.268 - 8519.680: 92.4933% ( 25) 00:08:24.669 8519.680 - 8570.092: 92.5823% ( 16) 00:08:24.669 8570.092 - 8620.505: 92.7046% ( 22) 00:08:24.669 8620.505 - 8670.917: 92.7602% ( 10) 00:08:24.669 8670.917 - 8721.329: 92.8325% ( 13) 00:08:24.669 8721.329 - 8771.742: 92.9437% ( 20) 00:08:24.669 8771.742 - 8822.154: 93.0327% ( 16) 00:08:24.669 8822.154 - 8872.566: 93.1383% ( 19) 00:08:24.669 8872.566 - 8922.978: 93.3830% ( 44) 00:08:24.669 8922.978 - 8973.391: 93.6277% ( 44) 00:08:24.669 8973.391 - 9023.803: 93.8167% ( 34) 00:08:24.669 9023.803 - 9074.215: 93.9613% ( 26) 00:08:24.669 9074.215 - 9124.628: 94.3005% ( 61) 00:08:24.669 9124.628 - 9175.040: 94.6063% ( 55) 00:08:24.669 9175.040 - 9225.452: 94.7231% ( 21) 00:08:24.669 9225.452 - 9275.865: 94.8399% ( 21) 00:08:24.669 9275.865 - 9326.277: 94.9121% ( 13) 00:08:24.669 9326.277 - 9376.689: 95.0011% ( 16) 00:08:24.669 9376.689 - 9427.102: 95.0512% ( 9) 00:08:24.669 9427.102 - 9477.514: 95.0901% ( 7) 00:08:24.669 9477.514 - 9527.926: 95.1346% ( 8) 00:08:24.669 9527.926 - 9578.338: 95.1735% ( 7) 00:08:24.669 9578.338 - 9628.751: 95.2180% ( 8) 00:08:24.669 9628.751 - 9679.163: 95.2847% ( 12) 00:08:24.669 9679.163 - 9729.575: 95.3514% ( 12) 00:08:24.669 9729.575 - 9779.988: 95.4738% ( 22) 00:08:24.669 9779.988 - 9830.400: 95.6239% ( 27) 00:08:24.669 9830.400 - 9880.812: 95.6851% ( 11) 00:08:24.669 9880.812 - 9931.225: 95.7685% ( 15) 00:08:24.669 9931.225 - 9981.637: 95.8407% ( 13) 00:08:24.669 9981.637 - 10032.049: 95.8964% ( 10) 00:08:24.669 10032.049 - 10082.462: 95.9631% ( 12) 00:08:24.669 10082.462 - 10132.874: 96.0354% ( 13) 00:08:24.669 10132.874 - 10183.286: 96.1021% ( 12) 00:08:24.669 10183.286 - 10233.698: 96.1466% ( 8) 00:08:24.669 10233.698 - 10284.111: 96.1688% ( 4) 00:08:24.669 10284.111 - 10334.523: 96.1744% ( 1) 00:08:24.669 10334.523 - 
10384.935: 96.1855% ( 2) 00:08:24.669 10384.935 - 10435.348: 96.2133% ( 5) 00:08:24.669 10435.348 - 10485.760: 96.2355% ( 4) 00:08:24.669 10485.760 - 10536.172: 96.2745% ( 7) 00:08:24.669 10536.172 - 10586.585: 96.3023% ( 5) 00:08:24.669 10586.585 - 10636.997: 96.3190% ( 3) 00:08:24.669 10636.997 - 10687.409: 96.3468% ( 5) 00:08:24.669 10687.409 - 10737.822: 96.4079% ( 11) 00:08:24.669 10737.822 - 10788.234: 96.4468% ( 7) 00:08:24.669 10788.234 - 10838.646: 96.4969% ( 9) 00:08:24.669 10838.646 - 10889.058: 96.5747% ( 14) 00:08:24.669 10889.058 - 10939.471: 96.6415% ( 12) 00:08:24.669 10939.471 - 10989.883: 96.6915% ( 9) 00:08:24.669 10989.883 - 11040.295: 96.7471% ( 10) 00:08:24.669 11040.295 - 11090.708: 96.7916% ( 8) 00:08:24.669 11090.708 - 11141.120: 96.8194% ( 5) 00:08:24.669 11141.120 - 11191.532: 96.8472% ( 5) 00:08:24.669 11191.532 - 11241.945: 96.9250% ( 14) 00:08:24.669 11241.945 - 11292.357: 96.9751% ( 9) 00:08:24.669 11292.357 - 11342.769: 97.0029% ( 5) 00:08:24.670 11342.769 - 11393.182: 97.0474% ( 8) 00:08:24.670 11393.182 - 11443.594: 97.0863% ( 7) 00:08:24.670 11443.594 - 11494.006: 97.2587% ( 31) 00:08:24.670 11494.006 - 11544.418: 97.3643% ( 19) 00:08:24.670 11544.418 - 11594.831: 97.3866% ( 4) 00:08:24.670 11594.831 - 11645.243: 97.4144% ( 5) 00:08:24.670 11645.243 - 11695.655: 97.4422% ( 5) 00:08:24.670 11695.655 - 11746.068: 97.4477% ( 1) 00:08:24.670 11746.068 - 11796.480: 97.4589% ( 2) 00:08:24.670 11796.480 - 11846.892: 97.4700% ( 2) 00:08:24.670 11846.892 - 11897.305: 97.4755% ( 1) 00:08:24.670 11897.305 - 11947.717: 97.4867% ( 2) 00:08:24.670 11947.717 - 11998.129: 97.4922% ( 1) 00:08:24.670 11998.129 - 12048.542: 97.5200% ( 5) 00:08:24.670 12048.542 - 12098.954: 97.5367% ( 3) 00:08:24.670 12098.954 - 12149.366: 97.5589% ( 4) 00:08:24.670 12149.366 - 12199.778: 97.5979% ( 7) 00:08:24.670 12199.778 - 12250.191: 97.6646% ( 12) 00:08:24.670 12250.191 - 12300.603: 97.6980% ( 6) 00:08:24.670 12300.603 - 12351.015: 97.7146% ( 3) 00:08:24.670 12351.015 - 12401.428: 97.7258% ( 2) 00:08:24.670 12401.428 - 12451.840: 97.7369% ( 2) 00:08:24.670 12451.840 - 12502.252: 97.7424% ( 1) 00:08:24.670 12502.252 - 12552.665: 97.7536% ( 2) 00:08:24.670 12552.665 - 12603.077: 97.7702% ( 3) 00:08:24.670 12603.077 - 12653.489: 97.8147% ( 8) 00:08:24.670 12653.489 - 12703.902: 98.0093% ( 35) 00:08:24.670 12703.902 - 12754.314: 98.0816% ( 13) 00:08:24.670 12754.314 - 12804.726: 98.1150% ( 6) 00:08:24.670 12804.726 - 12855.138: 98.1595% ( 8) 00:08:24.670 12855.138 - 12905.551: 98.1928% ( 6) 00:08:24.670 12905.551 - 13006.375: 98.2762% ( 15) 00:08:24.670 13006.375 - 13107.200: 98.3763% ( 18) 00:08:24.670 13107.200 - 13208.025: 98.6154% ( 43) 00:08:24.670 13208.025 - 13308.849: 98.6766% ( 11) 00:08:24.670 13308.849 - 13409.674: 98.7433% ( 12) 00:08:24.670 13409.674 - 13510.498: 98.7878% ( 8) 00:08:24.670 13510.498 - 13611.323: 98.8379% ( 9) 00:08:24.670 13611.323 - 13712.148: 98.8768% ( 7) 00:08:24.670 13712.148 - 13812.972: 98.9324% ( 10) 00:08:24.670 14115.446 - 14216.271: 98.9379% ( 1) 00:08:24.670 14317.095 - 14417.920: 98.9769% ( 7) 00:08:24.670 14417.920 - 14518.745: 99.0380% ( 11) 00:08:24.670 14518.745 - 14619.569: 99.2104% ( 31) 00:08:24.670 14619.569 - 14720.394: 99.2438% ( 6) 00:08:24.670 14720.394 - 14821.218: 99.2771% ( 6) 00:08:24.670 14821.218 - 14922.043: 99.2883% ( 2) 00:08:24.670 17341.834 - 17442.658: 99.3105% ( 4) 00:08:24.670 17442.658 - 17543.483: 99.3717% ( 11) 00:08:24.670 17543.483 - 17644.308: 99.4161% ( 8) 00:08:24.670 17644.308 - 17745.132: 99.4662% ( 9) 
00:08:24.670 17745.132 - 17845.957: 99.4773% ( 2) 00:08:24.670 17946.782 - 18047.606: 99.5051% ( 5) 00:08:24.670 18047.606 - 18148.431: 99.5218% ( 3) 00:08:24.670 18148.431 - 18249.255: 99.5385% ( 3) 00:08:24.670 18249.255 - 18350.080: 99.5663% ( 5) 00:08:24.670 18350.080 - 18450.905: 99.5830% ( 3) 00:08:24.670 18450.905 - 18551.729: 99.6052% ( 4) 00:08:24.670 18551.729 - 18652.554: 99.6274% ( 4) 00:08:24.670 18652.554 - 18753.378: 99.6441% ( 3) 00:08:24.670 22080.591 - 22181.415: 99.6552% ( 2) 00:08:24.670 22181.415 - 22282.240: 99.6719% ( 3) 00:08:24.670 22282.240 - 22383.065: 99.6942% ( 4) 00:08:24.670 23088.837 - 23189.662: 99.7220% ( 5) 00:08:24.670 23189.662 - 23290.486: 99.7776% ( 10) 00:08:24.670 23290.486 - 23391.311: 99.8332% ( 10) 00:08:24.670 23391.311 - 23492.135: 99.8832% ( 9) 00:08:24.670 23492.135 - 23592.960: 99.9166% ( 6) 00:08:24.670 23592.960 - 23693.785: 99.9444% ( 5) 00:08:24.670 23693.785 - 23794.609: 99.9722% ( 5) 00:08:24.670 23794.609 - 23895.434: 99.9944% ( 4) 00:08:24.670 23895.434 - 23996.258: 100.0000% ( 1) 00:08:24.670 00:08:24.670 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.670 ============================================================================== 00:08:24.670 Range in us Cumulative IO count 00:08:24.670 5469.735 - 5494.942: 0.0056% ( 1) 00:08:24.670 5494.942 - 5520.148: 0.0111% ( 1) 00:08:24.670 5520.148 - 5545.354: 0.0167% ( 1) 00:08:24.670 5545.354 - 5570.560: 0.0222% ( 1) 00:08:24.670 5570.560 - 5595.766: 0.0278% ( 1) 00:08:24.670 5595.766 - 5620.972: 0.0334% ( 1) 00:08:24.670 5620.972 - 5646.178: 0.0445% ( 2) 00:08:24.670 5646.178 - 5671.385: 0.0500% ( 1) 00:08:24.670 5671.385 - 5696.591: 0.0556% ( 1) 00:08:24.670 5696.591 - 5721.797: 0.0723% ( 3) 00:08:24.670 5721.797 - 5747.003: 0.0834% ( 2) 00:08:24.670 5747.003 - 5772.209: 0.1112% ( 5) 00:08:24.670 5772.209 - 5797.415: 0.1390% ( 5) 00:08:24.670 5797.415 - 5822.622: 0.2169% ( 14) 00:08:24.670 5822.622 - 5847.828: 0.4671% ( 45) 00:08:24.670 5847.828 - 5873.034: 0.6228% ( 28) 00:08:24.670 5873.034 - 5898.240: 0.8174% ( 35) 00:08:24.670 5898.240 - 5923.446: 1.0343% ( 39) 00:08:24.670 5923.446 - 5948.652: 1.2567% ( 40) 00:08:24.670 5948.652 - 5973.858: 1.5013% ( 44) 00:08:24.670 5973.858 - 5999.065: 1.8350% ( 60) 00:08:24.670 5999.065 - 6024.271: 2.3465% ( 92) 00:08:24.670 6024.271 - 6049.477: 2.8859% ( 97) 00:08:24.670 6049.477 - 6074.683: 3.5142% ( 113) 00:08:24.670 6074.683 - 6099.889: 4.2149% ( 126) 00:08:24.670 6099.889 - 6125.095: 5.0879% ( 157) 00:08:24.670 6125.095 - 6150.302: 5.9553% ( 156) 00:08:24.670 6150.302 - 6175.508: 6.7894% ( 150) 00:08:24.670 6175.508 - 6200.714: 7.6902% ( 162) 00:08:24.670 6200.714 - 6225.920: 8.6799% ( 178) 00:08:24.670 6225.920 - 6251.126: 9.7809% ( 198) 00:08:24.670 6251.126 - 6276.332: 10.9653% ( 213) 00:08:24.670 6276.332 - 6301.538: 12.2387% ( 229) 00:08:24.670 6301.538 - 6326.745: 13.6065% ( 246) 00:08:24.670 6326.745 - 6351.951: 15.1079% ( 270) 00:08:24.670 6351.951 - 6377.157: 16.7705% ( 299) 00:08:24.670 6377.157 - 6402.363: 18.5165% ( 314) 00:08:24.670 6402.363 - 6427.569: 20.5627% ( 368) 00:08:24.670 6427.569 - 6452.775: 22.4310% ( 336) 00:08:24.670 6452.775 - 6503.188: 26.4402% ( 721) 00:08:24.670 6503.188 - 6553.600: 30.6217% ( 752) 00:08:24.670 6553.600 - 6604.012: 35.6261% ( 900) 00:08:24.670 6604.012 - 6654.425: 40.8141% ( 933) 00:08:24.670 6654.425 - 6704.837: 45.5683% ( 855) 00:08:24.670 6704.837 - 6755.249: 49.9500% ( 788) 00:08:24.670 6755.249 - 6805.662: 54.4262% ( 805) 00:08:24.670 6805.662 - 6856.074: 59.2694% 
( 871) 00:08:24.670 6856.074 - 6906.486: 63.8178% ( 818) 00:08:24.670 6906.486 - 6956.898: 67.8659% ( 728) 00:08:24.670 6956.898 - 7007.311: 71.0632% ( 575) 00:08:24.670 7007.311 - 7057.723: 73.9546% ( 520) 00:08:24.670 7057.723 - 7108.135: 76.3623% ( 433) 00:08:24.670 7108.135 - 7158.548: 78.4197% ( 370) 00:08:24.670 7158.548 - 7208.960: 80.3270% ( 343) 00:08:24.670 7208.960 - 7259.372: 82.2175% ( 340) 00:08:24.670 7259.372 - 7309.785: 83.7911% ( 283) 00:08:24.671 7309.785 - 7360.197: 85.2313% ( 259) 00:08:24.671 7360.197 - 7410.609: 86.3490% ( 201) 00:08:24.671 7410.609 - 7461.022: 87.0218% ( 121) 00:08:24.671 7461.022 - 7511.434: 87.7280% ( 127) 00:08:24.671 7511.434 - 7561.846: 88.1839% ( 82) 00:08:24.671 7561.846 - 7612.258: 88.7678% ( 105) 00:08:24.671 7612.258 - 7662.671: 89.1737% ( 73) 00:08:24.671 7662.671 - 7713.083: 89.5574% ( 69) 00:08:24.671 7713.083 - 7763.495: 89.8187% ( 47) 00:08:24.671 7763.495 - 7813.908: 90.0968% ( 50) 00:08:24.671 7813.908 - 7864.320: 90.2636% ( 30) 00:08:24.671 7864.320 - 7914.732: 90.6139% ( 63) 00:08:24.671 7914.732 - 7965.145: 90.8752% ( 47) 00:08:24.671 7965.145 - 8015.557: 91.1755% ( 54) 00:08:24.671 8015.557 - 8065.969: 91.3979% ( 40) 00:08:24.671 8065.969 - 8116.382: 91.6092% ( 38) 00:08:24.671 8116.382 - 8166.794: 91.8706% ( 47) 00:08:24.671 8166.794 - 8217.206: 92.0540% ( 33) 00:08:24.671 8217.206 - 8267.618: 92.1819% ( 23) 00:08:24.671 8267.618 - 8318.031: 92.2653% ( 15) 00:08:24.671 8318.031 - 8368.443: 92.3543% ( 16) 00:08:24.671 8368.443 - 8418.855: 92.4989% ( 26) 00:08:24.671 8418.855 - 8469.268: 92.6601% ( 29) 00:08:24.671 8469.268 - 8519.680: 92.7825% ( 22) 00:08:24.671 8519.680 - 8570.092: 92.8992% ( 21) 00:08:24.671 8570.092 - 8620.505: 92.9715% ( 13) 00:08:24.671 8620.505 - 8670.917: 93.0994% ( 23) 00:08:24.671 8670.917 - 8721.329: 93.2718% ( 31) 00:08:24.671 8721.329 - 8771.742: 93.5109% ( 43) 00:08:24.671 8771.742 - 8822.154: 93.5943% ( 15) 00:08:24.671 8822.154 - 8872.566: 93.6555% ( 11) 00:08:24.671 8872.566 - 8922.978: 93.7222% ( 12) 00:08:24.671 8922.978 - 8973.391: 93.7834% ( 11) 00:08:24.671 8973.391 - 9023.803: 93.8501% ( 12) 00:08:24.671 9023.803 - 9074.215: 93.9113% ( 11) 00:08:24.671 9074.215 - 9124.628: 93.9780% ( 12) 00:08:24.671 9124.628 - 9175.040: 94.0336% ( 10) 00:08:24.671 9175.040 - 9225.452: 94.0781% ( 8) 00:08:24.671 9225.452 - 9275.865: 94.1392% ( 11) 00:08:24.671 9275.865 - 9326.277: 94.2449% ( 19) 00:08:24.671 9326.277 - 9376.689: 94.3227% ( 14) 00:08:24.671 9376.689 - 9427.102: 94.4173% ( 17) 00:08:24.671 9427.102 - 9477.514: 94.5563% ( 25) 00:08:24.671 9477.514 - 9527.926: 94.6286% ( 13) 00:08:24.671 9527.926 - 9578.338: 94.6953% ( 12) 00:08:24.671 9578.338 - 9628.751: 94.7787% ( 15) 00:08:24.671 9628.751 - 9679.163: 94.8899% ( 20) 00:08:24.671 9679.163 - 9729.575: 95.0178% ( 23) 00:08:24.671 9729.575 - 9779.988: 95.1679% ( 27) 00:08:24.671 9779.988 - 9830.400: 95.3681% ( 36) 00:08:24.671 9830.400 - 9880.812: 95.4626% ( 17) 00:08:24.671 9880.812 - 9931.225: 95.5627% ( 18) 00:08:24.671 9931.225 - 9981.637: 95.5961% ( 6) 00:08:24.671 9981.637 - 10032.049: 95.6294% ( 6) 00:08:24.671 10032.049 - 10082.462: 95.6628% ( 6) 00:08:24.671 10082.462 - 10132.874: 95.7017% ( 7) 00:08:24.671 10132.874 - 10183.286: 95.7240% ( 4) 00:08:24.671 10183.286 - 10233.698: 95.7740% ( 9) 00:08:24.671 10233.698 - 10284.111: 95.8241% ( 9) 00:08:24.671 10284.111 - 10334.523: 95.8630% ( 7) 00:08:24.671 10334.523 - 10384.935: 95.8964% ( 6) 00:08:24.671 10384.935 - 10435.348: 95.9353% ( 7) 00:08:24.671 10435.348 - 10485.760: 95.9742% ( 
7) 00:08:24.671 10485.760 - 10536.172: 96.0020% ( 5) 00:08:24.671 10536.172 - 10586.585: 96.0520% ( 9) 00:08:24.671 10586.585 - 10636.997: 96.0854% ( 6) 00:08:24.671 10636.997 - 10687.409: 96.1410% ( 10) 00:08:24.671 10687.409 - 10737.822: 96.2355% ( 17) 00:08:24.671 10737.822 - 10788.234: 96.3579% ( 22) 00:08:24.671 10788.234 - 10838.646: 96.4135% ( 10) 00:08:24.671 10838.646 - 10889.058: 96.5024% ( 16) 00:08:24.671 10889.058 - 10939.471: 96.5469% ( 8) 00:08:24.671 10939.471 - 10989.883: 96.6081% ( 11) 00:08:24.671 10989.883 - 11040.295: 96.6915% ( 15) 00:08:24.671 11040.295 - 11090.708: 96.7582% ( 12) 00:08:24.671 11090.708 - 11141.120: 96.8305% ( 13) 00:08:24.671 11141.120 - 11191.532: 96.8861% ( 10) 00:08:24.671 11191.532 - 11241.945: 96.9640% ( 14) 00:08:24.671 11241.945 - 11292.357: 97.0196% ( 10) 00:08:24.671 11292.357 - 11342.769: 97.1141% ( 17) 00:08:24.671 11342.769 - 11393.182: 97.1586% ( 8) 00:08:24.671 11393.182 - 11443.594: 97.2142% ( 10) 00:08:24.671 11443.594 - 11494.006: 97.2809% ( 12) 00:08:24.671 11494.006 - 11544.418: 97.3254% ( 8) 00:08:24.671 11544.418 - 11594.831: 97.3754% ( 9) 00:08:24.671 11594.831 - 11645.243: 97.4255% ( 9) 00:08:24.671 11645.243 - 11695.655: 97.4922% ( 12) 00:08:24.671 11695.655 - 11746.068: 97.5367% ( 8) 00:08:24.671 11746.068 - 11796.480: 97.5812% ( 8) 00:08:24.671 11796.480 - 11846.892: 97.6368% ( 10) 00:08:24.671 11846.892 - 11897.305: 97.6980% ( 11) 00:08:24.671 11897.305 - 11947.717: 97.7313% ( 6) 00:08:24.671 11947.717 - 11998.129: 97.7980% ( 12) 00:08:24.671 11998.129 - 12048.542: 97.8370% ( 7) 00:08:24.671 12048.542 - 12098.954: 97.8815% ( 8) 00:08:24.671 12098.954 - 12149.366: 97.9371% ( 10) 00:08:24.671 12149.366 - 12199.778: 97.9815% ( 8) 00:08:24.671 12199.778 - 12250.191: 98.0260% ( 8) 00:08:24.671 12250.191 - 12300.603: 98.0594% ( 6) 00:08:24.671 12300.603 - 12351.015: 98.0705% ( 2) 00:08:24.671 12351.015 - 12401.428: 98.0983% ( 5) 00:08:24.671 12401.428 - 12451.840: 98.1150% ( 3) 00:08:24.671 12451.840 - 12502.252: 98.1261% ( 2) 00:08:24.671 12502.252 - 12552.665: 98.1484% ( 4) 00:08:24.671 12552.665 - 12603.077: 98.1539% ( 1) 00:08:24.671 12603.077 - 12653.489: 98.1595% ( 1) 00:08:24.671 12653.489 - 12703.902: 98.1650% ( 1) 00:08:24.671 12703.902 - 12754.314: 98.1706% ( 1) 00:08:24.671 12754.314 - 12804.726: 98.1873% ( 3) 00:08:24.671 12804.726 - 12855.138: 98.2151% ( 5) 00:08:24.671 12855.138 - 12905.551: 98.2206% ( 1) 00:08:24.671 12905.551 - 13006.375: 98.2596% ( 7) 00:08:24.671 13006.375 - 13107.200: 98.3652% ( 19) 00:08:24.671 13107.200 - 13208.025: 98.5209% ( 28) 00:08:24.671 13208.025 - 13308.849: 98.5376% ( 3) 00:08:24.671 13308.849 - 13409.674: 98.5710% ( 6) 00:08:24.671 13409.674 - 13510.498: 98.6154% ( 8) 00:08:24.671 13510.498 - 13611.323: 98.6544% ( 7) 00:08:24.671 13611.323 - 13712.148: 98.6988% ( 8) 00:08:24.671 13712.148 - 13812.972: 98.7711% ( 13) 00:08:24.671 13812.972 - 13913.797: 98.8823% ( 20) 00:08:24.671 13913.797 - 14014.622: 98.9324% ( 9) 00:08:24.671 14115.446 - 14216.271: 98.9379% ( 1) 00:08:24.671 14317.095 - 14417.920: 98.9546% ( 3) 00:08:24.671 14417.920 - 14518.745: 99.1270% ( 31) 00:08:24.671 14518.745 - 14619.569: 99.1993% ( 13) 00:08:24.671 14619.569 - 14720.394: 99.2160% ( 3) 00:08:24.671 14720.394 - 14821.218: 99.2549% ( 7) 00:08:24.671 14821.218 - 14922.043: 99.2660% ( 2) 00:08:24.671 14922.043 - 15022.868: 99.2883% ( 4) 00:08:24.671 17442.658 - 17543.483: 99.3105% ( 4) 00:08:24.671 17543.483 - 17644.308: 99.3383% ( 5) 00:08:24.671 17644.308 - 17745.132: 99.3661% ( 5) 00:08:24.671 17745.132 - 
17845.957: 99.3939% ( 5) 00:08:24.671 17845.957 - 17946.782: 99.4161% ( 4) 00:08:24.671 17946.782 - 18047.606: 99.4440% ( 5) 00:08:24.671 18047.606 - 18148.431: 99.4662% ( 4) 00:08:24.672 18148.431 - 18249.255: 99.4996% ( 6) 00:08:24.672 18249.255 - 18350.080: 99.5218% ( 4) 00:08:24.672 18350.080 - 18450.905: 99.5496% ( 5) 00:08:24.672 18450.905 - 18551.729: 99.5718% ( 4) 00:08:24.672 18551.729 - 18652.554: 99.5996% ( 5) 00:08:24.672 18652.554 - 18753.378: 99.6274% ( 5) 00:08:24.672 18753.378 - 18854.203: 99.6441% ( 3) 00:08:24.672 21878.942 - 21979.766: 99.6497% ( 1) 00:08:24.672 21979.766 - 22080.591: 99.6775% ( 5) 00:08:24.672 22080.591 - 22181.415: 99.6997% ( 4) 00:08:24.672 22181.415 - 22282.240: 99.7275% ( 5) 00:08:24.672 22282.240 - 22383.065: 99.7553% ( 5) 00:08:24.672 22383.065 - 22483.889: 99.7831% ( 5) 00:08:24.672 22483.889 - 22584.714: 99.8054% ( 4) 00:08:24.672 22584.714 - 22685.538: 99.8332% ( 5) 00:08:24.672 22685.538 - 22786.363: 99.8610% ( 5) 00:08:24.672 22786.363 - 22887.188: 99.8888% ( 5) 00:08:24.672 22887.188 - 22988.012: 99.9166% ( 5) 00:08:24.672 22988.012 - 23088.837: 99.9444% ( 5) 00:08:24.672 23088.837 - 23189.662: 99.9722% ( 5) 00:08:24.672 23189.662 - 23290.486: 99.9944% ( 4) 00:08:24.672 23290.486 - 23391.311: 100.0000% ( 1) 00:08:24.672 00:08:24.672 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.672 ============================================================================== 00:08:24.672 Range in us Cumulative IO count 00:08:24.672 5016.025 - 5041.231: 0.0056% ( 1) 00:08:24.672 5041.231 - 5066.437: 0.0167% ( 2) 00:08:24.672 5066.437 - 5091.643: 0.0222% ( 1) 00:08:24.672 5091.643 - 5116.849: 0.0334% ( 2) 00:08:24.672 5116.849 - 5142.055: 0.0445% ( 2) 00:08:24.672 5142.055 - 5167.262: 0.0556% ( 2) 00:08:24.672 5167.262 - 5192.468: 0.0667% ( 2) 00:08:24.672 5192.468 - 5217.674: 0.1112% ( 8) 00:08:24.672 5217.674 - 5242.880: 0.1668% ( 10) 00:08:24.672 5242.880 - 5268.086: 0.2224% ( 10) 00:08:24.672 5268.086 - 5293.292: 0.2669% ( 8) 00:08:24.672 5293.292 - 5318.498: 0.2780% ( 2) 00:08:24.672 5318.498 - 5343.705: 0.2836% ( 1) 00:08:24.672 5343.705 - 5368.911: 0.2947% ( 2) 00:08:24.672 5368.911 - 5394.117: 0.3003% ( 1) 00:08:24.672 5394.117 - 5419.323: 0.3114% ( 2) 00:08:24.672 5419.323 - 5444.529: 0.3169% ( 1) 00:08:24.672 5444.529 - 5469.735: 0.3225% ( 1) 00:08:24.672 5469.735 - 5494.942: 0.3281% ( 1) 00:08:24.672 5494.942 - 5520.148: 0.3336% ( 1) 00:08:24.672 5520.148 - 5545.354: 0.3392% ( 1) 00:08:24.672 5570.560 - 5595.766: 0.3448% ( 1) 00:08:24.672 5595.766 - 5620.972: 0.3503% ( 1) 00:08:24.672 5646.178 - 5671.385: 0.3559% ( 1) 00:08:24.672 5797.415 - 5822.622: 0.3670% ( 2) 00:08:24.672 5847.828 - 5873.034: 0.3726% ( 1) 00:08:24.672 5898.240 - 5923.446: 0.3892% ( 3) 00:08:24.672 5923.446 - 5948.652: 0.3948% ( 1) 00:08:24.672 5948.652 - 5973.858: 0.4282% ( 6) 00:08:24.672 5973.858 - 5999.065: 0.4615% ( 6) 00:08:24.672 5999.065 - 6024.271: 0.4949% ( 6) 00:08:24.672 6024.271 - 6049.477: 0.5394% ( 8) 00:08:24.672 6049.477 - 6074.683: 0.6117% ( 13) 00:08:24.672 6074.683 - 6099.889: 0.7673% ( 28) 00:08:24.672 6099.889 - 6125.095: 1.0231% ( 46) 00:08:24.672 6125.095 - 6150.302: 1.5903% ( 102) 00:08:24.672 6150.302 - 6175.508: 2.6690% ( 194) 00:08:24.672 6175.508 - 6200.714: 4.1148% ( 260) 00:08:24.672 6200.714 - 6225.920: 5.6773% ( 281) 00:08:24.672 6225.920 - 6251.126: 7.3009% ( 292) 00:08:24.672 6251.126 - 6276.332: 8.5298% ( 221) 00:08:24.672 6276.332 - 6301.538: 11.3156% ( 501) 00:08:24.672 6301.538 - 6326.745: 12.9838% ( 300) 
00:08:24.672 6326.745 - 6351.951: 14.7576% ( 319) 00:08:24.672 6351.951 - 6377.157: 16.8539% ( 377) 00:08:24.672 6377.157 - 6402.363: 18.2662% ( 254) 00:08:24.672 6402.363 - 6427.569: 19.9177% ( 297) 00:08:24.672 6427.569 - 6452.775: 21.4246% ( 271) 00:08:24.672 6452.775 - 6503.188: 25.2947% ( 696) 00:08:24.672 6503.188 - 6553.600: 28.3585% ( 551) 00:08:24.672 6553.600 - 6604.012: 32.0674% ( 667) 00:08:24.672 6604.012 - 6654.425: 36.5325% ( 803) 00:08:24.672 6654.425 - 6704.837: 42.3432% ( 1045) 00:08:24.672 6704.837 - 6755.249: 47.7480% ( 972) 00:08:24.672 6755.249 - 6805.662: 53.6922% ( 1069) 00:08:24.672 6805.662 - 6856.074: 59.4195% ( 1030) 00:08:24.672 6856.074 - 6906.486: 65.5138% ( 1096) 00:08:24.672 6906.486 - 6956.898: 71.3245% ( 1045) 00:08:24.672 6956.898 - 7007.311: 75.4282% ( 738) 00:08:24.672 7007.311 - 7057.723: 78.7700% ( 601) 00:08:24.672 7057.723 - 7108.135: 81.0721% ( 414) 00:08:24.672 7108.135 - 7158.548: 82.7680% ( 305) 00:08:24.672 7158.548 - 7208.960: 83.8078% ( 187) 00:08:24.672 7208.960 - 7259.372: 84.3972% ( 106) 00:08:24.672 7259.372 - 7309.785: 85.3703% ( 175) 00:08:24.672 7309.785 - 7360.197: 86.1099% ( 133) 00:08:24.672 7360.197 - 7410.609: 86.9217% ( 146) 00:08:24.672 7410.609 - 7461.022: 87.5667% ( 116) 00:08:24.672 7461.022 - 7511.434: 87.9226% ( 64) 00:08:24.672 7511.434 - 7561.846: 88.1839% ( 47) 00:08:24.672 7561.846 - 7612.258: 88.5065% ( 58) 00:08:24.672 7612.258 - 7662.671: 88.9847% ( 86) 00:08:24.672 7662.671 - 7713.083: 89.2516% ( 48) 00:08:24.672 7713.083 - 7763.495: 89.9411% ( 124) 00:08:24.672 7763.495 - 7813.908: 90.1468% ( 37) 00:08:24.672 7813.908 - 7864.320: 90.5249% ( 68) 00:08:24.672 7864.320 - 7914.732: 91.3812% ( 154) 00:08:24.672 7914.732 - 7965.145: 91.7593% ( 68) 00:08:24.672 7965.145 - 8015.557: 91.9206% ( 29) 00:08:24.672 8015.557 - 8065.969: 92.0540% ( 24) 00:08:24.672 8065.969 - 8116.382: 92.1708% ( 21) 00:08:24.672 8116.382 - 8166.794: 92.4655% ( 53) 00:08:24.672 8166.794 - 8217.206: 92.5823% ( 21) 00:08:24.672 8217.206 - 8267.618: 92.6490% ( 12) 00:08:24.672 8267.618 - 8318.031: 92.7213% ( 13) 00:08:24.672 8318.031 - 8368.443: 92.7658% ( 8) 00:08:24.672 8368.443 - 8418.855: 92.8603% ( 17) 00:08:24.672 8418.855 - 8469.268: 92.9270% ( 12) 00:08:24.672 8469.268 - 8519.680: 92.9882% ( 11) 00:08:24.672 8519.680 - 8570.092: 93.1217% ( 24) 00:08:24.672 8570.092 - 8620.505: 93.1383% ( 3) 00:08:24.672 8620.505 - 8670.917: 93.1717% ( 6) 00:08:24.672 8670.917 - 8721.329: 93.2329% ( 11) 00:08:24.672 8721.329 - 8771.742: 93.2774% ( 8) 00:08:24.672 8771.742 - 8822.154: 93.3385% ( 11) 00:08:24.672 8822.154 - 8872.566: 93.3997% ( 11) 00:08:24.672 8872.566 - 8922.978: 93.7166% ( 57) 00:08:24.672 8922.978 - 8973.391: 93.8668% ( 27) 00:08:24.672 8973.391 - 9023.803: 93.9557% ( 16) 00:08:24.672 9023.803 - 9074.215: 94.0336% ( 14) 00:08:24.672 9074.215 - 9124.628: 94.2115% ( 32) 00:08:24.672 9124.628 - 9175.040: 94.3227% ( 20) 00:08:24.672 9175.040 - 9225.452: 94.3839% ( 11) 00:08:24.672 9225.452 - 9275.865: 94.4506% ( 12) 00:08:24.672 9275.865 - 9326.277: 94.4951% ( 8) 00:08:24.672 9326.277 - 9376.689: 94.5452% ( 9) 00:08:24.673 9376.689 - 9427.102: 94.5674% ( 4) 00:08:24.673 9427.102 - 9477.514: 94.6063% ( 7) 00:08:24.673 9477.514 - 9527.926: 94.6341% ( 5) 00:08:24.673 9527.926 - 9578.338: 94.6508% ( 3) 00:08:24.673 9578.338 - 9628.751: 94.6953% ( 8) 00:08:24.673 9628.751 - 9679.163: 94.7453% ( 9) 00:08:24.673 9679.163 - 9729.575: 94.7898% ( 8) 00:08:24.673 9729.575 - 9779.988: 94.8732% ( 15) 00:08:24.673 9779.988 - 9830.400: 94.9844% ( 20) 
00:08:24.673 9830.400 - 9880.812: 95.2903% ( 55) 00:08:24.673 9880.812 - 9931.225: 95.3236% ( 6) 00:08:24.673 9931.225 - 9981.637: 95.3681% ( 8) 00:08:24.673 9981.637 - 10032.049: 95.4237% ( 10) 00:08:24.673 10032.049 - 10082.462: 95.5294% ( 19) 00:08:24.673 10082.462 - 10132.874: 95.8519% ( 58) 00:08:24.673 10132.874 - 10183.286: 95.9964% ( 26) 00:08:24.673 10183.286 - 10233.698: 96.1132% ( 21) 00:08:24.673 10233.698 - 10284.111: 96.1466% ( 6) 00:08:24.673 10284.111 - 10334.523: 96.1688% ( 4) 00:08:24.673 10334.523 - 10384.935: 96.1966% ( 5) 00:08:24.673 10384.935 - 10435.348: 96.2189% ( 4) 00:08:24.673 10435.348 - 10485.760: 96.2411% ( 4) 00:08:24.673 10485.760 - 10536.172: 96.2578% ( 3) 00:08:24.673 10536.172 - 10586.585: 96.2745% ( 3) 00:08:24.673 10586.585 - 10636.997: 96.2967% ( 4) 00:08:24.673 10636.997 - 10687.409: 96.3134% ( 3) 00:08:24.673 10687.409 - 10737.822: 96.3356% ( 4) 00:08:24.673 10737.822 - 10788.234: 96.3412% ( 1) 00:08:24.673 10788.234 - 10838.646: 96.3523% ( 2) 00:08:24.673 10838.646 - 10889.058: 96.3634% ( 2) 00:08:24.673 10889.058 - 10939.471: 96.3690% ( 1) 00:08:24.673 10939.471 - 10989.883: 96.4746% ( 19) 00:08:24.673 10989.883 - 11040.295: 96.5469% ( 13) 00:08:24.673 11040.295 - 11090.708: 96.6248% ( 14) 00:08:24.673 11090.708 - 11141.120: 96.7360% ( 20) 00:08:24.673 11141.120 - 11191.532: 96.7749% ( 7) 00:08:24.673 11191.532 - 11241.945: 96.8194% ( 8) 00:08:24.673 11241.945 - 11292.357: 96.8639% ( 8) 00:08:24.673 11292.357 - 11342.769: 96.9362% ( 13) 00:08:24.673 11342.769 - 11393.182: 96.9806% ( 8) 00:08:24.673 11393.182 - 11443.594: 97.0307% ( 9) 00:08:24.673 11443.594 - 11494.006: 97.0863% ( 10) 00:08:24.673 11494.006 - 11544.418: 97.1530% ( 12) 00:08:24.673 11544.418 - 11594.831: 97.2364% ( 15) 00:08:24.673 11594.831 - 11645.243: 97.3087% ( 13) 00:08:24.673 11645.243 - 11695.655: 97.3977% ( 16) 00:08:24.673 11695.655 - 11746.068: 97.4644% ( 12) 00:08:24.673 11746.068 - 11796.480: 97.5145% ( 9) 00:08:24.673 11796.480 - 11846.892: 97.5701% ( 10) 00:08:24.673 11846.892 - 11897.305: 97.6201% ( 9) 00:08:24.673 11897.305 - 11947.717: 97.6590% ( 7) 00:08:24.673 11947.717 - 11998.129: 97.7146% ( 10) 00:08:24.673 11998.129 - 12048.542: 97.7536% ( 7) 00:08:24.673 12048.542 - 12098.954: 97.7980% ( 8) 00:08:24.673 12098.954 - 12149.366: 97.8425% ( 8) 00:08:24.673 12149.366 - 12199.778: 97.8759% ( 6) 00:08:24.673 12199.778 - 12250.191: 98.0260% ( 27) 00:08:24.673 12250.191 - 12300.603: 98.1762% ( 27) 00:08:24.673 12300.603 - 12351.015: 98.1928% ( 3) 00:08:24.673 12351.015 - 12401.428: 98.2040% ( 2) 00:08:24.673 12451.840 - 12502.252: 98.2095% ( 1) 00:08:24.673 12502.252 - 12552.665: 98.2206% ( 2) 00:08:24.673 12855.138 - 12905.551: 98.2262% ( 1) 00:08:24.673 12905.551 - 13006.375: 98.2596% ( 6) 00:08:24.673 13006.375 - 13107.200: 98.5153% ( 46) 00:08:24.673 13107.200 - 13208.025: 98.5487% ( 6) 00:08:24.673 13208.025 - 13308.849: 98.5765% ( 5) 00:08:24.673 13510.498 - 13611.323: 98.5932% ( 3) 00:08:24.673 13611.323 - 13712.148: 98.8823% ( 52) 00:08:24.673 13712.148 - 13812.972: 98.9324% ( 9) 00:08:24.673 14317.095 - 14417.920: 98.9491% ( 3) 00:08:24.673 14417.920 - 14518.745: 99.0158% ( 12) 00:08:24.673 14518.745 - 14619.569: 99.1937% ( 32) 00:08:24.673 14619.569 - 14720.394: 99.2271% ( 6) 00:08:24.673 14720.394 - 14821.218: 99.2549% ( 5) 00:08:24.673 14821.218 - 14922.043: 99.2883% ( 6) 00:08:24.673 17341.834 - 17442.658: 99.2938% ( 1) 00:08:24.673 17644.308 - 17745.132: 99.3272% ( 6) 00:08:24.673 17745.132 - 17845.957: 99.3828% ( 10) 00:08:24.673 17845.957 - 17946.782: 
99.4328% ( 9) 00:08:24.673 17946.782 - 18047.606: 99.4829% ( 9) 00:08:24.673 18047.606 - 18148.431: 99.5107% ( 5) 00:08:24.673 18148.431 - 18249.255: 99.5385% ( 5) 00:08:24.673 18249.255 - 18350.080: 99.5607% ( 4) 00:08:24.673 18350.080 - 18450.905: 99.5885% ( 5) 00:08:24.673 18450.905 - 18551.729: 99.6163% ( 5) 00:08:24.673 18551.729 - 18652.554: 99.6441% ( 5) 00:08:24.673 20971.520 - 21072.345: 99.6552% ( 2) 00:08:24.673 21072.345 - 21173.169: 99.6831% ( 5) 00:08:24.673 21173.169 - 21273.994: 99.7609% ( 14) 00:08:24.673 21273.994 - 21374.818: 99.9500% ( 34) 00:08:24.673 21374.818 - 21475.643: 99.9722% ( 4) 00:08:24.673 21475.643 - 21576.468: 99.9778% ( 1) 00:08:24.673 21778.117 - 21878.942: 99.9833% ( 1) 00:08:24.673 21878.942 - 21979.766: 100.0000% ( 3) 00:08:24.673 00:08:24.673 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.673 ============================================================================== 00:08:24.673 Range in us Cumulative IO count 00:08:24.673 4411.077 - 4436.283: 0.0111% ( 2) 00:08:24.673 4436.283 - 4461.489: 0.0167% ( 1) 00:08:24.673 4461.489 - 4486.695: 0.0278% ( 2) 00:08:24.673 4486.695 - 4511.902: 0.0389% ( 2) 00:08:24.673 4511.902 - 4537.108: 0.0500% ( 2) 00:08:24.673 4537.108 - 4562.314: 0.0667% ( 3) 00:08:24.673 4562.314 - 4587.520: 0.0945% ( 5) 00:08:24.673 4587.520 - 4612.726: 0.1168% ( 4) 00:08:24.673 4612.726 - 4637.932: 0.1946% ( 14) 00:08:24.673 4637.932 - 4663.138: 0.2502% ( 10) 00:08:24.673 4663.138 - 4688.345: 0.2725% ( 4) 00:08:24.673 4688.345 - 4713.551: 0.2891% ( 3) 00:08:24.673 4713.551 - 4738.757: 0.3058% ( 3) 00:08:24.673 4738.757 - 4763.963: 0.3114% ( 1) 00:08:24.673 4763.963 - 4789.169: 0.3225% ( 2) 00:08:24.673 4789.169 - 4814.375: 0.3281% ( 1) 00:08:24.673 4814.375 - 4839.582: 0.3392% ( 2) 00:08:24.673 4839.582 - 4864.788: 0.3503% ( 2) 00:08:24.673 4864.788 - 4889.994: 0.3559% ( 1) 00:08:24.673 5696.591 - 5721.797: 0.3614% ( 1) 00:08:24.673 5747.003 - 5772.209: 0.3670% ( 1) 00:08:24.673 5772.209 - 5797.415: 0.3726% ( 1) 00:08:24.673 5797.415 - 5822.622: 0.3892% ( 3) 00:08:24.673 5847.828 - 5873.034: 0.4115% ( 4) 00:08:24.673 5873.034 - 5898.240: 0.4282% ( 3) 00:08:24.673 5898.240 - 5923.446: 0.4448% ( 3) 00:08:24.673 5923.446 - 5948.652: 0.4615% ( 3) 00:08:24.673 5948.652 - 5973.858: 0.4949% ( 6) 00:08:24.673 5973.858 - 5999.065: 0.5950% ( 18) 00:08:24.673 5999.065 - 6024.271: 0.7284% ( 24) 00:08:24.673 6024.271 - 6049.477: 0.9286% ( 36) 00:08:24.673 6049.477 - 6074.683: 1.2456% ( 57) 00:08:24.673 6074.683 - 6099.889: 1.6237% ( 68) 00:08:24.673 6099.889 - 6125.095: 2.1575% ( 96) 00:08:24.673 6125.095 - 6150.302: 3.0027% ( 152) 00:08:24.673 6150.302 - 6175.508: 4.3817% ( 248) 00:08:24.673 6175.508 - 6200.714: 5.5494% ( 210) 00:08:24.673 6200.714 - 6225.920: 7.0452% ( 269) 00:08:24.673 6225.920 - 6251.126: 8.6799% ( 294) 00:08:24.673 6251.126 - 6276.332: 10.2313% ( 279) 00:08:24.673 6276.332 - 6301.538: 11.2544% ( 184) 00:08:24.673 6301.538 - 6326.745: 12.8225% ( 282) 00:08:24.673 6326.745 - 6351.951: 14.2238% ( 252) 00:08:24.673 6351.951 - 6377.157: 15.6917% ( 264) 00:08:24.674 6377.157 - 6402.363: 17.4044% ( 308) 00:08:24.674 6402.363 - 6427.569: 19.2782% ( 337) 00:08:24.674 6427.569 - 6452.775: 21.1188% ( 331) 00:08:24.674 6452.775 - 6503.188: 25.7284% ( 829) 00:08:24.674 6503.188 - 6553.600: 30.4048% ( 841) 00:08:24.674 6553.600 - 6604.012: 34.6808% ( 769) 00:08:24.674 6604.012 - 6654.425: 39.2571% ( 823) 00:08:24.674 6654.425 - 6704.837: 44.2671% ( 901) 00:08:24.674 6704.837 - 6755.249: 48.8323% ( 821) 
00:08:24.674 6755.249 - 6805.662: 53.4920% ( 838) 00:08:24.674 6805.662 - 6856.074: 59.0914% ( 1007) 00:08:24.674 6856.074 - 6906.486: 64.0347% ( 889) 00:08:24.674 6906.486 - 6956.898: 68.7166% ( 842) 00:08:24.674 6956.898 - 7007.311: 72.6312% ( 704) 00:08:24.674 7007.311 - 7057.723: 76.2233% ( 646) 00:08:24.674 7057.723 - 7108.135: 78.9090% ( 483) 00:08:24.674 7108.135 - 7158.548: 81.5725% ( 479) 00:08:24.674 7158.548 - 7208.960: 83.3018% ( 311) 00:08:24.674 7208.960 - 7259.372: 84.6141% ( 236) 00:08:24.674 7259.372 - 7309.785: 85.3870% ( 139) 00:08:24.674 7309.785 - 7360.197: 86.1154% ( 131) 00:08:24.674 7360.197 - 7410.609: 86.7549% ( 115) 00:08:24.674 7410.609 - 7461.022: 87.4166% ( 119) 00:08:24.674 7461.022 - 7511.434: 88.0394% ( 112) 00:08:24.674 7511.434 - 7561.846: 88.5954% ( 100) 00:08:24.674 7561.846 - 7612.258: 89.0013% ( 73) 00:08:24.674 7612.258 - 7662.671: 89.3572% ( 64) 00:08:24.674 7662.671 - 7713.083: 89.7075% ( 63) 00:08:24.674 7713.083 - 7763.495: 90.0189% ( 56) 00:08:24.674 7763.495 - 7813.908: 90.3081% ( 52) 00:08:24.674 7813.908 - 7864.320: 90.6361% ( 59) 00:08:24.674 7864.320 - 7914.732: 90.9030% ( 48) 00:08:24.674 7914.732 - 7965.145: 91.2978% ( 71) 00:08:24.674 7965.145 - 8015.557: 91.5647% ( 48) 00:08:24.674 8015.557 - 8065.969: 91.8817% ( 57) 00:08:24.674 8065.969 - 8116.382: 92.4544% ( 103) 00:08:24.674 8116.382 - 8166.794: 92.5990% ( 26) 00:08:24.674 8166.794 - 8217.206: 92.7157% ( 21) 00:08:24.674 8217.206 - 8267.618: 92.7992% ( 15) 00:08:24.674 8267.618 - 8318.031: 92.8714% ( 13) 00:08:24.674 8318.031 - 8368.443: 92.9382% ( 12) 00:08:24.674 8368.443 - 8418.855: 92.9827% ( 8) 00:08:24.674 8418.855 - 8469.268: 93.0160% ( 6) 00:08:24.674 8469.268 - 8519.680: 93.0716% ( 10) 00:08:24.674 8519.680 - 8570.092: 93.1105% ( 7) 00:08:24.674 8570.092 - 8620.505: 93.1217% ( 2) 00:08:24.674 8620.505 - 8670.917: 93.1661% ( 8) 00:08:24.674 8670.917 - 8721.329: 93.2329% ( 12) 00:08:24.674 8721.329 - 8771.742: 93.3107% ( 14) 00:08:24.674 8771.742 - 8822.154: 93.3830% ( 13) 00:08:24.674 8822.154 - 8872.566: 93.7055% ( 58) 00:08:24.674 8872.566 - 8922.978: 93.8835% ( 32) 00:08:24.674 8922.978 - 8973.391: 93.9613% ( 14) 00:08:24.674 8973.391 - 9023.803: 94.1114% ( 27) 00:08:24.674 9023.803 - 9074.215: 94.1837% ( 13) 00:08:24.674 9074.215 - 9124.628: 94.2226% ( 7) 00:08:24.674 9124.628 - 9175.040: 94.2560% ( 6) 00:08:24.674 9175.040 - 9225.452: 94.2949% ( 7) 00:08:24.674 9225.452 - 9275.865: 94.3116% ( 3) 00:08:24.674 9275.865 - 9326.277: 94.3172% ( 1) 00:08:24.674 9326.277 - 9376.689: 94.3561% ( 7) 00:08:24.674 9376.689 - 9427.102: 94.4673% ( 20) 00:08:24.674 9427.102 - 9477.514: 94.6564% ( 34) 00:08:24.674 9477.514 - 9527.926: 94.6897% ( 6) 00:08:24.674 9527.926 - 9578.338: 94.7342% ( 8) 00:08:24.674 9578.338 - 9628.751: 94.7954% ( 11) 00:08:24.674 9628.751 - 9679.163: 94.8677% ( 13) 00:08:24.674 9679.163 - 9729.575: 94.9288% ( 11) 00:08:24.674 9729.575 - 9779.988: 94.9622% ( 6) 00:08:24.674 9779.988 - 9830.400: 95.0011% ( 7) 00:08:24.674 9830.400 - 9880.812: 95.1068% ( 19) 00:08:24.674 9880.812 - 9931.225: 95.1790% ( 13) 00:08:24.674 9931.225 - 9981.637: 95.2513% ( 13) 00:08:24.674 9981.637 - 10032.049: 95.3459% ( 17) 00:08:24.674 10032.049 - 10082.462: 95.4126% ( 12) 00:08:24.674 10082.462 - 10132.874: 95.4626% ( 9) 00:08:24.674 10132.874 - 10183.286: 95.5294% ( 12) 00:08:24.674 10183.286 - 10233.698: 95.6072% ( 14) 00:08:24.674 10233.698 - 10284.111: 95.6795% ( 13) 00:08:24.674 10284.111 - 10334.523: 95.7573% ( 14) 00:08:24.674 10334.523 - 10384.935: 95.8463% ( 16) 
00:08:24.674 10384.935 - 10435.348: 95.9464% ( 18) 00:08:24.674 10435.348 - 10485.760: 96.0576% ( 20) 00:08:24.674 10485.760 - 10536.172: 96.1855% ( 23) 00:08:24.674 10536.172 - 10586.585: 96.3301% ( 26) 00:08:24.674 10586.585 - 10636.997: 96.5191% ( 34) 00:08:24.674 10636.997 - 10687.409: 96.6915% ( 31) 00:08:24.674 10687.409 - 10737.822: 96.7638% ( 13) 00:08:24.674 10737.822 - 10788.234: 96.8416% ( 14) 00:08:24.674 10788.234 - 10838.646: 96.9195% ( 14) 00:08:24.674 10838.646 - 10889.058: 96.9584% ( 7) 00:08:24.674 10889.058 - 10939.471: 97.0085% ( 9) 00:08:24.674 10939.471 - 10989.883: 97.0307% ( 4) 00:08:24.674 10989.883 - 11040.295: 97.0585% ( 5) 00:08:24.674 11040.295 - 11090.708: 97.0863% ( 5) 00:08:24.674 11090.708 - 11141.120: 97.1141% ( 5) 00:08:24.674 11141.120 - 11191.532: 97.1252% ( 2) 00:08:24.674 11191.532 - 11241.945: 97.1363% ( 2) 00:08:24.674 11241.945 - 11292.357: 97.1419% ( 1) 00:08:24.674 11292.357 - 11342.769: 97.1530% ( 2) 00:08:24.674 11796.480 - 11846.892: 97.1641% ( 2) 00:08:24.674 11846.892 - 11897.305: 97.1864% ( 4) 00:08:24.674 11897.305 - 11947.717: 97.2086% ( 4) 00:08:24.674 11947.717 - 11998.129: 97.2309% ( 4) 00:08:24.674 11998.129 - 12048.542: 97.2587% ( 5) 00:08:24.674 12048.542 - 12098.954: 97.2920% ( 6) 00:08:24.674 12098.954 - 12149.366: 97.3810% ( 16) 00:08:24.674 12149.366 - 12199.778: 97.4366% ( 10) 00:08:24.674 12199.778 - 12250.191: 97.4755% ( 7) 00:08:24.674 12250.191 - 12300.603: 97.5089% ( 6) 00:08:24.674 12300.603 - 12351.015: 97.5367% ( 5) 00:08:24.674 12351.015 - 12401.428: 97.5812% ( 8) 00:08:24.674 12401.428 - 12451.840: 97.6201% ( 7) 00:08:24.674 12451.840 - 12502.252: 97.6479% ( 5) 00:08:24.674 12502.252 - 12552.665: 97.6980% ( 9) 00:08:24.675 12552.665 - 12603.077: 97.7758% ( 14) 00:08:24.675 12603.077 - 12653.489: 97.8648% ( 16) 00:08:24.675 12653.489 - 12703.902: 98.0371% ( 31) 00:08:24.675 12703.902 - 12754.314: 98.1484% ( 20) 00:08:24.675 12754.314 - 12804.726: 98.2540% ( 19) 00:08:24.675 12804.726 - 12855.138: 98.3430% ( 16) 00:08:24.675 12855.138 - 12905.551: 98.4264% ( 15) 00:08:24.675 12905.551 - 13006.375: 98.5153% ( 16) 00:08:24.675 13006.375 - 13107.200: 98.5543% ( 7) 00:08:24.675 13107.200 - 13208.025: 98.5765% ( 4) 00:08:24.675 13510.498 - 13611.323: 98.6154% ( 7) 00:08:24.675 13611.323 - 13712.148: 98.7378% ( 22) 00:08:24.675 13712.148 - 13812.972: 98.7767% ( 7) 00:08:24.675 13812.972 - 13913.797: 98.8545% ( 14) 00:08:24.675 13913.797 - 14014.622: 99.1214% ( 48) 00:08:24.675 14014.622 - 14115.446: 99.1659% ( 8) 00:08:24.675 14115.446 - 14216.271: 99.1937% ( 5) 00:08:24.675 14216.271 - 14317.095: 99.2104% ( 3) 00:08:24.675 14317.095 - 14417.920: 99.2327% ( 4) 00:08:24.675 14417.920 - 14518.745: 99.2438% ( 2) 00:08:24.675 14518.745 - 14619.569: 99.2605% ( 3) 00:08:24.675 14619.569 - 14720.394: 99.2771% ( 3) 00:08:24.675 14720.394 - 14821.218: 99.2883% ( 2) 00:08:24.675 16938.535 - 17039.360: 99.3105% ( 4) 00:08:24.675 17039.360 - 17140.185: 99.3550% ( 8) 00:08:24.675 17140.185 - 17241.009: 99.4050% ( 9) 00:08:24.675 17241.009 - 17341.834: 99.4495% ( 8) 00:08:24.675 17341.834 - 17442.658: 99.4884% ( 7) 00:08:24.675 17442.658 - 17543.483: 99.5107% ( 4) 00:08:24.675 17543.483 - 17644.308: 99.5385% ( 5) 00:08:24.675 17644.308 - 17745.132: 99.5663% ( 5) 00:08:24.675 17745.132 - 17845.957: 99.5941% ( 5) 00:08:24.675 17845.957 - 17946.782: 99.6219% ( 5) 00:08:24.675 17946.782 - 18047.606: 99.6441% ( 4) 00:08:24.675 20769.871 - 20870.695: 99.9277% ( 51) 00:08:24.675 20870.695 - 20971.520: 99.9611% ( 6) 00:08:24.675 21374.818 - 
21475.643: 99.9666% ( 1) 00:08:24.675 21475.643 - 21576.468: 99.9889% ( 4) 00:08:24.675 21576.468 - 21677.292: 100.0000% ( 2) 00:08:24.675 00:08:24.675 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.675 ============================================================================== 00:08:24.675 Range in us Cumulative IO count 00:08:24.675 4007.778 - 4032.985: 0.0056% ( 1) 00:08:24.675 4184.222 - 4209.428: 0.0111% ( 1) 00:08:24.675 4209.428 - 4234.634: 0.0222% ( 2) 00:08:24.675 4234.634 - 4259.840: 0.0334% ( 2) 00:08:24.675 4259.840 - 4285.046: 0.0500% ( 3) 00:08:24.675 4285.046 - 4310.252: 0.0612% ( 2) 00:08:24.675 4310.252 - 4335.458: 0.0778% ( 3) 00:08:24.675 4335.458 - 4360.665: 0.1279% ( 9) 00:08:24.675 4360.665 - 4385.871: 0.2057% ( 14) 00:08:24.675 4385.871 - 4411.077: 0.2613% ( 10) 00:08:24.675 4411.077 - 4436.283: 0.2891% ( 5) 00:08:24.675 4436.283 - 4461.489: 0.3003% ( 2) 00:08:24.675 4461.489 - 4486.695: 0.3114% ( 2) 00:08:24.675 4486.695 - 4511.902: 0.3225% ( 2) 00:08:24.675 4511.902 - 4537.108: 0.3281% ( 1) 00:08:24.675 4537.108 - 4562.314: 0.3392% ( 2) 00:08:24.675 4562.314 - 4587.520: 0.3503% ( 2) 00:08:24.675 4587.520 - 4612.726: 0.3559% ( 1) 00:08:24.675 5696.591 - 5721.797: 0.3614% ( 1) 00:08:24.675 5747.003 - 5772.209: 0.3837% ( 4) 00:08:24.675 5772.209 - 5797.415: 0.3892% ( 1) 00:08:24.675 5797.415 - 5822.622: 0.4004% ( 2) 00:08:24.675 5822.622 - 5847.828: 0.4115% ( 2) 00:08:24.675 5847.828 - 5873.034: 0.4282% ( 3) 00:08:24.675 5873.034 - 5898.240: 0.4448% ( 3) 00:08:24.675 5898.240 - 5923.446: 0.4671% ( 4) 00:08:24.675 5923.446 - 5948.652: 0.5227% ( 10) 00:08:24.675 5948.652 - 5973.858: 0.5672% ( 8) 00:08:24.675 5973.858 - 5999.065: 0.6339% ( 12) 00:08:24.675 5999.065 - 6024.271: 0.7618% ( 23) 00:08:24.675 6024.271 - 6049.477: 0.9508% ( 34) 00:08:24.675 6049.477 - 6074.683: 1.1844% ( 42) 00:08:24.675 6074.683 - 6099.889: 1.5347% ( 63) 00:08:24.675 6099.889 - 6125.095: 2.0963% ( 101) 00:08:24.675 6125.095 - 6150.302: 3.0249% ( 167) 00:08:24.675 6150.302 - 6175.508: 4.2093% ( 213) 00:08:24.675 6175.508 - 6200.714: 5.1435% ( 168) 00:08:24.675 6200.714 - 6225.920: 6.5614% ( 255) 00:08:24.675 6225.920 - 6251.126: 7.8459% ( 231) 00:08:24.675 6251.126 - 6276.332: 9.3138% ( 264) 00:08:24.675 6276.332 - 6301.538: 11.0876% ( 319) 00:08:24.675 6301.538 - 6326.745: 12.6946% ( 289) 00:08:24.675 6326.745 - 6351.951: 14.1848% ( 268) 00:08:24.675 6351.951 - 6377.157: 15.6973% ( 272) 00:08:24.675 6377.157 - 6402.363: 17.1541% ( 262) 00:08:24.675 6402.363 - 6427.569: 18.9224% ( 318) 00:08:24.675 6427.569 - 6452.775: 20.7462% ( 328) 00:08:24.675 6452.775 - 6503.188: 24.0380% ( 592) 00:08:24.675 6503.188 - 6553.600: 28.1194% ( 734) 00:08:24.675 6553.600 - 6604.012: 32.7903% ( 840) 00:08:24.675 6604.012 - 6654.425: 37.9949% ( 936) 00:08:24.675 6654.425 - 6704.837: 44.0002% ( 1080) 00:08:24.675 6704.837 - 6755.249: 49.5385% ( 996) 00:08:24.675 6755.249 - 6805.662: 54.4484% ( 883) 00:08:24.675 6805.662 - 6856.074: 60.6261% ( 1111) 00:08:24.675 6856.074 - 6906.486: 65.9419% ( 956) 00:08:24.675 6906.486 - 6956.898: 70.4181% ( 805) 00:08:24.675 6956.898 - 7007.311: 74.2660% ( 692) 00:08:24.675 7007.311 - 7057.723: 77.2464% ( 536) 00:08:24.675 7057.723 - 7108.135: 79.7097% ( 443) 00:08:24.675 7108.135 - 7158.548: 81.5391% ( 329) 00:08:24.675 7158.548 - 7208.960: 82.8292% ( 232) 00:08:24.675 7208.960 - 7259.372: 84.1915% ( 245) 00:08:24.675 7259.372 - 7309.785: 85.2258% ( 186) 00:08:24.675 7309.785 - 7360.197: 86.0098% ( 141) 00:08:24.675 7360.197 - 7410.609: 86.6437% ( 114) 
00:08:24.675 7410.609 - 7461.022: 87.2498% ( 109) 00:08:24.675 7461.022 - 7511.434: 87.7113% ( 83) 00:08:24.675 7511.434 - 7561.846: 88.3508% ( 115) 00:08:24.675 7561.846 - 7612.258: 88.7734% ( 76) 00:08:24.675 7612.258 - 7662.671: 88.9847% ( 38) 00:08:24.675 7662.671 - 7713.083: 89.1904% ( 37) 00:08:24.675 7713.083 - 7763.495: 89.4017% ( 38) 00:08:24.675 7763.495 - 7813.908: 89.5463% ( 26) 00:08:24.675 7813.908 - 7864.320: 89.7520% ( 37) 00:08:24.675 7864.320 - 7914.732: 90.1524% ( 72) 00:08:24.675 7914.732 - 7965.145: 90.3525% ( 36) 00:08:24.675 7965.145 - 8015.557: 90.5805% ( 41) 00:08:24.675 8015.557 - 8065.969: 91.0309% ( 81) 00:08:24.675 8065.969 - 8116.382: 91.4702% ( 79) 00:08:24.675 8116.382 - 8166.794: 92.0096% ( 97) 00:08:24.675 8166.794 - 8217.206: 92.3432% ( 60) 00:08:24.675 8217.206 - 8267.618: 92.5712% ( 41) 00:08:24.675 8267.618 - 8318.031: 92.7102% ( 25) 00:08:24.675 8318.031 - 8368.443: 92.7769% ( 12) 00:08:24.675 8368.443 - 8418.855: 92.8381% ( 11) 00:08:24.675 8418.855 - 8469.268: 92.8937% ( 10) 00:08:24.675 8469.268 - 8519.680: 92.9382% ( 8) 00:08:24.675 8519.680 - 8570.092: 92.9882% ( 9) 00:08:24.675 8570.092 - 8620.505: 93.0661% ( 14) 00:08:24.675 8620.505 - 8670.917: 93.2329% ( 30) 00:08:24.675 8670.917 - 8721.329: 93.4331% ( 36) 00:08:24.675 8721.329 - 8771.742: 93.5165% ( 15) 00:08:24.675 8771.742 - 8822.154: 93.6165% ( 18) 00:08:24.675 8822.154 - 8872.566: 93.7111% ( 17) 00:08:24.675 8872.566 - 8922.978: 93.8112% ( 18) 00:08:24.675 8922.978 - 8973.391: 94.0725% ( 47) 00:08:24.676 8973.391 - 9023.803: 94.3339% ( 47) 00:08:24.676 9023.803 - 9074.215: 94.4006% ( 12) 00:08:24.676 9074.215 - 9124.628: 94.4451% ( 8) 00:08:24.676 9124.628 - 9175.040: 94.5285% ( 15) 00:08:24.676 9175.040 - 9225.452: 94.6119% ( 15) 00:08:24.676 9225.452 - 9275.865: 94.8176% ( 37) 00:08:24.676 9275.865 - 9326.277: 94.9399% ( 22) 00:08:24.676 9326.277 - 9376.689: 95.0011% ( 11) 00:08:24.676 9376.689 - 9427.102: 95.0678% ( 12) 00:08:24.676 9427.102 - 9477.514: 95.1234% ( 10) 00:08:24.676 9477.514 - 9527.926: 95.1735% ( 9) 00:08:24.676 9527.926 - 9578.338: 95.2402% ( 12) 00:08:24.676 9578.338 - 9628.751: 95.3236% ( 15) 00:08:24.676 9628.751 - 9679.163: 95.4515% ( 23) 00:08:24.676 9679.163 - 9729.575: 95.5627% ( 20) 00:08:24.676 9729.575 - 9779.988: 95.7518% ( 34) 00:08:24.676 9779.988 - 9830.400: 95.8852% ( 24) 00:08:24.676 9830.400 - 9880.812: 95.9575% ( 13) 00:08:24.676 9880.812 - 9931.225: 96.0298% ( 13) 00:08:24.676 9931.225 - 9981.637: 96.0854% ( 10) 00:08:24.676 9981.637 - 10032.049: 96.1466% ( 11) 00:08:24.676 10032.049 - 10082.462: 96.2022% ( 10) 00:08:24.676 10082.462 - 10132.874: 96.2522% ( 9) 00:08:24.676 10132.874 - 10183.286: 96.3023% ( 9) 00:08:24.676 10183.286 - 10233.698: 96.3468% ( 8) 00:08:24.676 10233.698 - 10284.111: 96.3746% ( 5) 00:08:24.676 10284.111 - 10334.523: 96.4024% ( 5) 00:08:24.676 10334.523 - 10384.935: 96.4190% ( 3) 00:08:24.676 10384.935 - 10435.348: 96.4413% ( 4) 00:08:24.676 10536.172 - 10586.585: 96.4524% ( 2) 00:08:24.676 10586.585 - 10636.997: 96.4746% ( 4) 00:08:24.676 10636.997 - 10687.409: 96.5024% ( 5) 00:08:24.676 10687.409 - 10737.822: 96.7137% ( 38) 00:08:24.676 10737.822 - 10788.234: 96.7471% ( 6) 00:08:24.676 10788.234 - 10838.646: 96.7638% ( 3) 00:08:24.676 10838.646 - 10889.058: 96.7805% ( 3) 00:08:24.676 10889.058 - 10939.471: 96.7972% ( 3) 00:08:24.676 11040.295 - 11090.708: 96.8027% ( 1) 00:08:24.676 11090.708 - 11141.120: 96.8194% ( 3) 00:08:24.676 11141.120 - 11191.532: 96.8639% ( 8) 00:08:24.676 11191.532 - 11241.945: 96.8972% ( 6) 
00:08:24.676 11241.945 - 11292.357: 96.9250% ( 5) 00:08:24.676 11292.357 - 11342.769: 96.9695% ( 8) 00:08:24.676 11342.769 - 11393.182: 96.9973% ( 5) 00:08:24.676 11393.182 - 11443.594: 97.0363% ( 7) 00:08:24.676 11443.594 - 11494.006: 97.0919% ( 10) 00:08:24.676 11494.006 - 11544.418: 97.1197% ( 5) 00:08:24.676 11544.418 - 11594.831: 97.1586% ( 7) 00:08:24.676 11594.831 - 11645.243: 97.2086% ( 9) 00:08:24.676 11645.243 - 11695.655: 97.2531% ( 8) 00:08:24.676 11695.655 - 11746.068: 97.3032% ( 9) 00:08:24.676 11746.068 - 11796.480: 97.3254% ( 4) 00:08:24.676 11796.480 - 11846.892: 97.3476% ( 4) 00:08:24.676 11846.892 - 11897.305: 97.3643% ( 3) 00:08:24.676 11897.305 - 11947.717: 97.3866% ( 4) 00:08:24.676 11947.717 - 11998.129: 97.4032% ( 3) 00:08:24.676 11998.129 - 12048.542: 97.4199% ( 3) 00:08:24.676 12048.542 - 12098.954: 97.4422% ( 4) 00:08:24.676 12098.954 - 12149.366: 97.4644% ( 4) 00:08:24.676 12149.366 - 12199.778: 97.4811% ( 3) 00:08:24.676 12199.778 - 12250.191: 97.4978% ( 3) 00:08:24.676 12250.191 - 12300.603: 97.5089% ( 2) 00:08:24.676 12401.428 - 12451.840: 97.5200% ( 2) 00:08:24.676 12451.840 - 12502.252: 97.5423% ( 4) 00:08:24.676 12502.252 - 12552.665: 97.5812% ( 7) 00:08:24.676 12552.665 - 12603.077: 97.6257% ( 8) 00:08:24.676 12603.077 - 12653.489: 97.7424% ( 21) 00:08:24.676 12653.489 - 12703.902: 97.7814% ( 7) 00:08:24.676 12703.902 - 12754.314: 97.8370% ( 10) 00:08:24.676 12754.314 - 12804.726: 97.8870% ( 9) 00:08:24.676 12804.726 - 12855.138: 97.9315% ( 8) 00:08:24.676 12855.138 - 12905.551: 97.9704% ( 7) 00:08:24.676 12905.551 - 13006.375: 98.0594% ( 16) 00:08:24.676 13006.375 - 13107.200: 98.1928% ( 24) 00:08:24.676 13107.200 - 13208.025: 98.4431% ( 45) 00:08:24.676 13208.025 - 13308.849: 98.5487% ( 19) 00:08:24.676 13308.849 - 13409.674: 98.6321% ( 15) 00:08:24.676 13409.674 - 13510.498: 98.7155% ( 15) 00:08:24.676 13510.498 - 13611.323: 98.7878% ( 13) 00:08:24.676 13611.323 - 13712.148: 98.8601% ( 13) 00:08:24.676 13712.148 - 13812.972: 98.8990% ( 7) 00:08:24.676 13812.972 - 13913.797: 98.9435% ( 8) 00:08:24.676 13913.797 - 14014.622: 99.2160% ( 49) 00:08:24.676 14014.622 - 14115.446: 99.2605% ( 8) 00:08:24.676 14115.446 - 14216.271: 99.2883% ( 5) 00:08:24.676 16333.588 - 16434.412: 99.2994% ( 2) 00:08:24.676 16434.412 - 16535.237: 99.3605% ( 11) 00:08:24.676 16535.237 - 16636.062: 99.4106% ( 9) 00:08:24.676 16636.062 - 16736.886: 99.4662% ( 10) 00:08:24.676 16736.886 - 16837.711: 99.5051% ( 7) 00:08:24.676 16837.711 - 16938.535: 99.5274% ( 4) 00:08:24.676 16938.535 - 17039.360: 99.5496% ( 4) 00:08:24.676 17039.360 - 17140.185: 99.5774% ( 5) 00:08:24.676 17140.185 - 17241.009: 99.6052% ( 5) 00:08:24.676 17241.009 - 17341.834: 99.6330% ( 5) 00:08:24.676 17341.834 - 17442.658: 99.6441% ( 2) 00:08:24.676 19761.625 - 19862.449: 99.6664% ( 4) 00:08:24.676 19862.449 - 19963.274: 99.6886% ( 4) 00:08:24.676 19963.274 - 20064.098: 99.7220% ( 6) 00:08:24.676 20064.098 - 20164.923: 99.7498% ( 5) 00:08:24.676 20164.923 - 20265.748: 99.8665% ( 21) 00:08:24.676 20265.748 - 20366.572: 99.8999% ( 6) 00:08:24.676 20366.572 - 20467.397: 99.9333% ( 6) 00:08:24.676 20467.397 - 20568.222: 99.9555% ( 4) 00:08:24.676 20568.222 - 20669.046: 99.9666% ( 2) 00:08:24.676 20769.871 - 20870.695: 99.9833% ( 3) 00:08:24.676 20870.695 - 20971.520: 100.0000% ( 3) 00:08:24.676 00:08:24.676 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.676 ============================================================================== 00:08:24.676 Range in us Cumulative IO count 00:08:24.676 
4032.985 - 4058.191: 0.0222% ( 4) 00:08:24.676 4058.191 - 4083.397: 0.0556% ( 6) 00:08:24.676 4083.397 - 4108.603: 0.0890% ( 6) 00:08:24.676 4108.603 - 4133.809: 0.1335% ( 8) 00:08:24.676 4133.809 - 4159.015: 0.2057% ( 13) 00:08:24.676 4159.015 - 4184.222: 0.2613% ( 10) 00:08:24.676 4184.222 - 4209.428: 0.2947% ( 6) 00:08:24.676 4209.428 - 4234.634: 0.3058% ( 2) 00:08:24.676 4234.634 - 4259.840: 0.3114% ( 1) 00:08:24.676 4259.840 - 4285.046: 0.3225% ( 2) 00:08:24.676 4285.046 - 4310.252: 0.3336% ( 2) 00:08:24.676 4310.252 - 4335.458: 0.3392% ( 1) 00:08:24.676 4335.458 - 4360.665: 0.3503% ( 2) 00:08:24.676 4360.665 - 4385.871: 0.3559% ( 1) 00:08:24.676 5747.003 - 5772.209: 0.3726% ( 3) 00:08:24.676 5772.209 - 5797.415: 0.3781% ( 1) 00:08:24.676 5797.415 - 5822.622: 0.3837% ( 1) 00:08:24.676 5822.622 - 5847.828: 0.3948% ( 2) 00:08:24.676 5847.828 - 5873.034: 0.4115% ( 3) 00:08:24.676 5873.034 - 5898.240: 0.4170% ( 1) 00:08:24.676 5898.240 - 5923.446: 0.4282% ( 2) 00:08:24.677 5923.446 - 5948.652: 0.4615% ( 6) 00:08:24.677 5948.652 - 5973.858: 0.5060% ( 8) 00:08:24.677 5973.858 - 5999.065: 0.5727% ( 12) 00:08:24.677 5999.065 - 6024.271: 0.7006% ( 23) 00:08:24.677 6024.271 - 6049.477: 0.8952% ( 35) 00:08:24.677 6049.477 - 6074.683: 1.2177% ( 58) 00:08:24.677 6074.683 - 6099.889: 1.6793% ( 83) 00:08:24.677 6099.889 - 6125.095: 2.1575% ( 86) 00:08:24.677 6125.095 - 6150.302: 2.9026% ( 134) 00:08:24.677 6150.302 - 6175.508: 4.1926% ( 232) 00:08:24.677 6175.508 - 6200.714: 5.4104% ( 219) 00:08:24.677 6200.714 - 6225.920: 6.6670% ( 226) 00:08:24.677 6225.920 - 6251.126: 8.3185% ( 297) 00:08:24.677 6251.126 - 6276.332: 9.7420% ( 256) 00:08:24.677 6276.332 - 6301.538: 11.1488% ( 253) 00:08:24.677 6301.538 - 6326.745: 12.6279% ( 266) 00:08:24.677 6326.745 - 6351.951: 14.5907% ( 353) 00:08:24.677 6351.951 - 6377.157: 16.0587% ( 264) 00:08:24.677 6377.157 - 6402.363: 17.5211% ( 263) 00:08:24.677 6402.363 - 6427.569: 19.0892% ( 282) 00:08:24.677 6427.569 - 6452.775: 20.9631% ( 337) 00:08:24.677 6452.775 - 6503.188: 24.4495% ( 627) 00:08:24.677 6503.188 - 6553.600: 28.2751% ( 688) 00:08:24.677 6553.600 - 6604.012: 32.9904% ( 848) 00:08:24.677 6604.012 - 6654.425: 37.8559% ( 875) 00:08:24.677 6654.425 - 6704.837: 43.2940% ( 978) 00:08:24.677 6704.837 - 6755.249: 48.7378% ( 979) 00:08:24.677 6755.249 - 6805.662: 53.8256% ( 915) 00:08:24.677 6805.662 - 6856.074: 60.0589% ( 1121) 00:08:24.677 6856.074 - 6906.486: 65.0300% ( 894) 00:08:24.677 6906.486 - 6956.898: 69.6953% ( 839) 00:08:24.677 6956.898 - 7007.311: 73.9379% ( 763) 00:08:24.677 7007.311 - 7057.723: 77.5189% ( 644) 00:08:24.677 7057.723 - 7108.135: 79.7931% ( 409) 00:08:24.677 7108.135 - 7158.548: 81.7727% ( 356) 00:08:24.677 7158.548 - 7208.960: 83.4075% ( 294) 00:08:24.677 7208.960 - 7259.372: 84.6697% ( 227) 00:08:24.677 7259.372 - 7309.785: 85.5427% ( 157) 00:08:24.677 7309.785 - 7360.197: 86.2711% ( 131) 00:08:24.677 7360.197 - 7410.609: 87.1886% ( 165) 00:08:24.677 7410.609 - 7461.022: 87.7113% ( 94) 00:08:24.677 7461.022 - 7511.434: 88.0004% ( 52) 00:08:24.677 7511.434 - 7561.846: 88.3285% ( 59) 00:08:24.677 7561.846 - 7612.258: 88.6399% ( 56) 00:08:24.677 7612.258 - 7662.671: 88.9235% ( 51) 00:08:24.677 7662.671 - 7713.083: 89.3683% ( 80) 00:08:24.677 7713.083 - 7763.495: 89.7520% ( 69) 00:08:24.677 7763.495 - 7813.908: 89.9911% ( 43) 00:08:24.677 7813.908 - 7864.320: 90.1468% ( 28) 00:08:24.677 7864.320 - 7914.732: 90.2413% ( 17) 00:08:24.677 7914.732 - 7965.145: 90.3803% ( 25) 00:08:24.677 7965.145 - 8015.557: 90.6361% ( 46) 00:08:24.677 
8015.557 - 8065.969: 90.9698% ( 60) 00:08:24.677 8065.969 - 8116.382: 91.2867% ( 57) 00:08:24.677 8116.382 - 8166.794: 91.5925% ( 55) 00:08:24.677 8166.794 - 8217.206: 91.7260% ( 24) 00:08:24.677 8217.206 - 8267.618: 91.8761% ( 27) 00:08:24.677 8267.618 - 8318.031: 92.0596% ( 33) 00:08:24.677 8318.031 - 8368.443: 92.1819% ( 22) 00:08:24.677 8368.443 - 8418.855: 92.2876% ( 19) 00:08:24.677 8418.855 - 8469.268: 92.4766% ( 34) 00:08:24.677 8469.268 - 8519.680: 92.6768% ( 36) 00:08:24.677 8519.680 - 8570.092: 92.7992% ( 22) 00:08:24.677 8570.092 - 8620.505: 93.0661% ( 48) 00:08:24.677 8620.505 - 8670.917: 93.1606% ( 17) 00:08:24.677 8670.917 - 8721.329: 93.2551% ( 17) 00:08:24.677 8721.329 - 8771.742: 93.3608% ( 19) 00:08:24.677 8771.742 - 8822.154: 93.4664% ( 19) 00:08:24.677 8822.154 - 8872.566: 93.5554% ( 16) 00:08:24.677 8872.566 - 8922.978: 93.8501% ( 53) 00:08:24.677 8922.978 - 8973.391: 94.0002% ( 27) 00:08:24.677 8973.391 - 9023.803: 94.1281% ( 23) 00:08:24.677 9023.803 - 9074.215: 94.4284% ( 54) 00:08:24.677 9074.215 - 9124.628: 94.6008% ( 31) 00:08:24.677 9124.628 - 9175.040: 94.7620% ( 29) 00:08:24.677 9175.040 - 9225.452: 94.9121% ( 27) 00:08:24.677 9225.452 - 9275.865: 95.1457% ( 42) 00:08:24.677 9275.865 - 9326.277: 95.3125% ( 30) 00:08:24.677 9326.277 - 9376.689: 95.3848% ( 13) 00:08:24.677 9376.689 - 9427.102: 95.4738% ( 16) 00:08:24.677 9427.102 - 9477.514: 95.5405% ( 12) 00:08:24.677 9477.514 - 9527.926: 95.6016% ( 11) 00:08:24.677 9527.926 - 9578.338: 95.6628% ( 11) 00:08:24.677 9578.338 - 9628.751: 95.7351% ( 13) 00:08:24.677 9628.751 - 9679.163: 95.8574% ( 22) 00:08:24.677 9679.163 - 9729.575: 95.9520% ( 17) 00:08:24.677 9729.575 - 9779.988: 96.0743% ( 22) 00:08:24.677 9779.988 - 9830.400: 96.2133% ( 25) 00:08:24.677 9830.400 - 9880.812: 96.2800% ( 12) 00:08:24.677 9880.812 - 9931.225: 96.3468% ( 12) 00:08:24.677 9931.225 - 9981.637: 96.3912% ( 8) 00:08:24.677 9981.637 - 10032.049: 96.4079% ( 3) 00:08:24.677 10032.049 - 10082.462: 96.4190% ( 2) 00:08:24.677 10082.462 - 10132.874: 96.4246% ( 1) 00:08:24.677 10132.874 - 10183.286: 96.4302% ( 1) 00:08:24.677 10183.286 - 10233.698: 96.4413% ( 2) 00:08:24.677 10687.409 - 10737.822: 96.4524% ( 2) 00:08:24.677 10737.822 - 10788.234: 96.4746% ( 4) 00:08:24.677 10788.234 - 10838.646: 96.4913% ( 3) 00:08:24.677 10838.646 - 10889.058: 96.5136% ( 4) 00:08:24.677 10889.058 - 10939.471: 96.5358% ( 4) 00:08:24.677 10939.471 - 10989.883: 96.5525% ( 3) 00:08:24.677 10989.883 - 11040.295: 96.5914% ( 7) 00:08:24.677 11040.295 - 11090.708: 96.6470% ( 10) 00:08:24.677 11090.708 - 11141.120: 96.7415% ( 17) 00:08:24.677 11141.120 - 11191.532: 96.8083% ( 12) 00:08:24.677 11191.532 - 11241.945: 96.9751% ( 30) 00:08:24.677 11241.945 - 11292.357: 97.0140% ( 7) 00:08:24.677 11292.357 - 11342.769: 97.0418% ( 5) 00:08:24.677 11342.769 - 11393.182: 97.0641% ( 4) 00:08:24.677 11393.182 - 11443.594: 97.0863% ( 4) 00:08:24.677 11443.594 - 11494.006: 97.0974% ( 2) 00:08:24.677 11494.006 - 11544.418: 97.1030% ( 1) 00:08:24.677 11544.418 - 11594.831: 97.1141% ( 2) 00:08:24.677 11594.831 - 11645.243: 97.1252% ( 2) 00:08:24.677 11645.243 - 11695.655: 97.1308% ( 1) 00:08:24.677 11695.655 - 11746.068: 97.1419% ( 2) 00:08:24.677 11746.068 - 11796.480: 97.1530% ( 2) 00:08:24.677 11796.480 - 11846.892: 97.1697% ( 3) 00:08:24.677 11846.892 - 11897.305: 97.1975% ( 5) 00:08:24.677 11897.305 - 11947.717: 97.2086% ( 2) 00:08:24.677 11947.717 - 11998.129: 97.2420% ( 6) 00:08:24.677 11998.129 - 12048.542: 97.3087% ( 12) 00:08:24.677 12048.542 - 12098.954: 97.3699% ( 11) 
00:08:24.677 12098.954 - 12149.366: 97.4088% ( 7) 00:08:24.677 12149.366 - 12199.778: 97.4310% ( 4) 00:08:24.677 12199.778 - 12250.191: 97.4477% ( 3) 00:08:24.677 12250.191 - 12300.603: 97.4644% ( 3) 00:08:24.677 12300.603 - 12351.015: 97.4922% ( 5) 00:08:24.677 12351.015 - 12401.428: 97.5089% ( 3) 00:08:24.677 12401.428 - 12451.840: 97.5589% ( 9) 00:08:24.678 12451.840 - 12502.252: 97.5867% ( 5) 00:08:24.678 12502.252 - 12552.665: 97.6479% ( 11) 00:08:24.678 12552.665 - 12603.077: 97.7258% ( 14) 00:08:24.678 12603.077 - 12653.489: 97.8258% ( 18) 00:08:24.678 12653.489 - 12703.902: 98.0427% ( 39) 00:08:24.678 12703.902 - 12754.314: 98.0983% ( 10) 00:08:24.678 12754.314 - 12804.726: 98.1539% ( 10) 00:08:24.678 12804.726 - 12855.138: 98.1706% ( 3) 00:08:24.678 12855.138 - 12905.551: 98.1928% ( 4) 00:08:24.678 12905.551 - 13006.375: 98.2762% ( 15) 00:08:24.678 13006.375 - 13107.200: 98.3708% ( 17) 00:08:24.678 13107.200 - 13208.025: 98.5431% ( 31) 00:08:24.678 13208.025 - 13308.849: 98.6988% ( 28) 00:08:24.678 13308.849 - 13409.674: 98.8101% ( 20) 00:08:24.678 13409.674 - 13510.498: 98.8379% ( 5) 00:08:24.678 13510.498 - 13611.323: 98.8601% ( 4) 00:08:24.678 13611.323 - 13712.148: 98.8823% ( 4) 00:08:24.678 13712.148 - 13812.972: 98.9213% ( 7) 00:08:24.678 13812.972 - 13913.797: 98.9657% ( 8) 00:08:24.678 13913.797 - 14014.622: 98.9991% ( 6) 00:08:24.678 14014.622 - 14115.446: 99.0269% ( 5) 00:08:24.678 14115.446 - 14216.271: 99.0492% ( 4) 00:08:24.678 14216.271 - 14317.095: 99.0881% ( 7) 00:08:24.678 14317.095 - 14417.920: 99.1437% ( 10) 00:08:24.678 14417.920 - 14518.745: 99.1882% ( 8) 00:08:24.678 14518.745 - 14619.569: 99.2827% ( 17) 00:08:24.678 14619.569 - 14720.394: 99.2883% ( 1) 00:08:24.678 15627.815 - 15728.640: 99.3105% ( 4) 00:08:24.678 15728.640 - 15829.465: 99.3494% ( 7) 00:08:24.678 15829.465 - 15930.289: 99.4050% ( 10) 00:08:24.678 15930.289 - 16031.114: 99.4718% ( 12) 00:08:24.678 16031.114 - 16131.938: 99.4996% ( 5) 00:08:24.678 16131.938 - 16232.763: 99.5274% ( 5) 00:08:24.678 16232.763 - 16333.588: 99.5496% ( 4) 00:08:24.678 16333.588 - 16434.412: 99.5774% ( 5) 00:08:24.678 16434.412 - 16535.237: 99.5996% ( 4) 00:08:24.678 16535.237 - 16636.062: 99.6274% ( 5) 00:08:24.678 16636.062 - 16736.886: 99.6441% ( 3) 00:08:24.678 19358.326 - 19459.151: 99.6497% ( 1) 00:08:24.678 19459.151 - 19559.975: 99.7943% ( 26) 00:08:24.678 19559.975 - 19660.800: 99.9555% ( 29) 00:08:24.678 19660.800 - 19761.625: 99.9666% ( 2) 00:08:24.678 20164.923 - 20265.748: 99.9778% ( 2) 00:08:24.678 20669.046 - 20769.871: 99.9889% ( 2) 00:08:24.678 20769.871 - 20870.695: 100.0000% ( 2) 00:08:24.678 00:08:24.678 14:16:06 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:24.678 00:08:24.678 real 0m2.440s 00:08:24.678 user 0m2.148s 00:08:24.678 sys 0m0.192s 00:08:24.678 14:16:06 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.678 ************************************ 00:08:24.678 14:16:06 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:24.678 END TEST nvme_perf 00:08:24.678 ************************************ 00:08:24.678 14:16:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:24.678 14:16:06 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:24.678 14:16:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.678 14:16:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.678 ************************************ 00:08:24.678 START 
TEST nvme_hello_world 00:08:24.678 ************************************ 00:08:24.678 14:16:06 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:24.937 Initializing NVMe Controllers 00:08:24.937 Attached to 0000:00:13.0 00:08:24.937 Namespace ID: 1 size: 1GB 00:08:24.937 Attached to 0000:00:10.0 00:08:24.937 Namespace ID: 1 size: 6GB 00:08:24.937 Attached to 0000:00:11.0 00:08:24.937 Namespace ID: 1 size: 5GB 00:08:24.937 Attached to 0000:00:12.0 00:08:24.937 Namespace ID: 1 size: 4GB 00:08:24.937 Namespace ID: 2 size: 4GB 00:08:24.937 Namespace ID: 3 size: 4GB 00:08:24.937 Initialization complete. 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 INFO: using host memory buffer for IO 00:08:24.937 Hello world! 00:08:24.937 00:08:24.937 real 0m0.182s 00:08:24.937 user 0m0.065s 00:08:24.937 sys 0m0.081s 00:08:24.937 14:16:06 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.937 ************************************ 00:08:24.937 END TEST nvme_hello_world 00:08:24.937 ************************************ 00:08:24.937 14:16:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:24.937 14:16:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:24.937 14:16:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.937 14:16:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.937 14:16:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.937 ************************************ 00:08:24.937 START TEST nvme_sgl 00:08:24.937 ************************************ 00:08:24.937 14:16:06 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:25.196 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:25.196 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:25.196 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_0 
Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:25.196 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:25.196 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:25.196 NVMe Readv/Writev Request test 00:08:25.196 Attached to 0000:00:13.0 00:08:25.196 Attached to 0000:00:10.0 00:08:25.196 Attached to 0000:00:11.0 00:08:25.196 Attached to 0000:00:12.0 00:08:25.196 0000:00:10.0: build_io_request_2 test passed 00:08:25.196 0000:00:10.0: build_io_request_4 test passed 00:08:25.196 0000:00:10.0: build_io_request_5 test passed 00:08:25.196 0000:00:10.0: build_io_request_6 test passed 00:08:25.196 0000:00:10.0: build_io_request_7 test passed 00:08:25.196 0000:00:10.0: build_io_request_10 test passed 00:08:25.196 0000:00:11.0: build_io_request_2 test passed 00:08:25.196 0000:00:11.0: build_io_request_4 test passed 00:08:25.196 0000:00:11.0: build_io_request_5 test passed 00:08:25.196 0000:00:11.0: build_io_request_6 test passed 00:08:25.196 0000:00:11.0: build_io_request_7 test passed 00:08:25.196 0000:00:11.0: build_io_request_10 test passed 00:08:25.196 Cleaning up... 00:08:25.196 00:08:25.196 real 0m0.249s 00:08:25.196 user 0m0.123s 00:08:25.196 sys 0m0.087s 00:08:25.196 14:16:06 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.196 14:16:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:25.196 ************************************ 00:08:25.196 END TEST nvme_sgl 00:08:25.196 ************************************ 00:08:25.196 14:16:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:25.196 14:16:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.196 14:16:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.196 14:16:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.196 ************************************ 00:08:25.196 START TEST nvme_e2edp 00:08:25.196 ************************************ 00:08:25.196 14:16:06 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:25.455 NVMe Write/Read with End-to-End data protection test 00:08:25.455 Attached to 0000:00:13.0 00:08:25.455 Attached to 0000:00:10.0 00:08:25.455 Attached to 0000:00:11.0 00:08:25.455 Attached to 0000:00:12.0 00:08:25.455 Cleaning up... 
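The sgl and e2edp stages above are driven by the run_test helper, which prints the START/END banners and the real/user/sys timings around each test binary. A minimal sketch for re-running those two binaries by hand, using the paths printed in this log (that they probe every attached controller without extra arguments is assumed from the "Attached to ..." lines above):

    # Sketch only: re-run the SGL and end-to-end data protection tests outside the harness.
    # Paths are the ones shown in this log; run with the same privileges as the harness (root here).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/nvme/sgl/sgl"            # should print the build_io_request_* pass/fail lines
    "$SPDK/test/nvme/e2edp/nvme_dp"      # should print the end-to-end data protection banner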
00:08:25.455 00:08:25.455 real 0m0.184s 00:08:25.455 user 0m0.052s 00:08:25.455 sys 0m0.086s 00:08:25.455 14:16:07 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.455 14:16:07 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:25.455 ************************************ 00:08:25.455 END TEST nvme_e2edp 00:08:25.455 ************************************ 00:08:25.455 14:16:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:25.455 14:16:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.455 14:16:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.455 14:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.455 ************************************ 00:08:25.455 START TEST nvme_reserve 00:08:25.455 ************************************ 00:08:25.455 14:16:07 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:25.714 ===================================================== 00:08:25.714 NVMe Controller at PCI bus 0, device 19, function 0 00:08:25.714 ===================================================== 00:08:25.714 Reservations: Not Supported 00:08:25.714 ===================================================== 00:08:25.714 NVMe Controller at PCI bus 0, device 16, function 0 00:08:25.714 ===================================================== 00:08:25.714 Reservations: Not Supported 00:08:25.714 ===================================================== 00:08:25.714 NVMe Controller at PCI bus 0, device 17, function 0 00:08:25.714 ===================================================== 00:08:25.714 Reservations: Not Supported 00:08:25.714 ===================================================== 00:08:25.714 NVMe Controller at PCI bus 0, device 18, function 0 00:08:25.714 ===================================================== 00:08:25.714 Reservations: Not Supported 00:08:25.714 Reservation test passed 00:08:25.714 00:08:25.714 real 0m0.177s 00:08:25.714 user 0m0.061s 00:08:25.714 sys 0m0.078s 00:08:25.714 14:16:07 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.714 14:16:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 ************************************ 00:08:25.714 END TEST nvme_reserve 00:08:25.714 ************************************ 00:08:25.714 14:16:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:25.714 14:16:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.714 14:16:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.714 14:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.714 ************************************ 00:08:25.714 START TEST nvme_err_injection 00:08:25.714 ************************************ 00:08:25.714 14:16:07 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:25.972 NVMe Error Injection test 00:08:25.972 Attached to 0000:00:13.0 00:08:25.972 Attached to 0000:00:10.0 00:08:25.972 Attached to 0000:00:11.0 00:08:25.972 Attached to 0000:00:12.0 00:08:25.972 0000:00:11.0: get features failed as expected 00:08:25.972 0000:00:12.0: get features failed as expected 00:08:25.972 0000:00:13.0: get features failed as expected 00:08:25.972 0000:00:10.0: get features failed as expected 00:08:25.972 
0000:00:13.0: get features successfully as expected 00:08:25.972 0000:00:10.0: get features successfully as expected 00:08:25.972 0000:00:11.0: get features successfully as expected 00:08:25.972 0000:00:12.0: get features successfully as expected 00:08:25.972 0000:00:13.0: read failed as expected 00:08:25.972 0000:00:10.0: read failed as expected 00:08:25.972 0000:00:11.0: read failed as expected 00:08:25.972 0000:00:12.0: read failed as expected 00:08:25.972 0000:00:13.0: read successfully as expected 00:08:25.972 0000:00:10.0: read successfully as expected 00:08:25.972 0000:00:11.0: read successfully as expected 00:08:25.972 0000:00:12.0: read successfully as expected 00:08:25.972 Cleaning up... 00:08:25.972 00:08:25.972 real 0m0.188s 00:08:25.972 user 0m0.065s 00:08:25.972 sys 0m0.084s 00:08:25.972 14:16:07 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.972 14:16:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 ************************************ 00:08:25.972 END TEST nvme_err_injection 00:08:25.972 ************************************ 00:08:25.972 14:16:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:25.972 14:16:07 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:25.972 14:16:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.972 14:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.972 ************************************ 00:08:25.972 START TEST nvme_overhead 00:08:25.972 ************************************ 00:08:25.972 14:16:07 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:27.346 Initializing NVMe Controllers 00:08:27.346 Attached to 0000:00:13.0 00:08:27.346 Attached to 0000:00:10.0 00:08:27.346 Attached to 0000:00:11.0 00:08:27.346 Attached to 0000:00:12.0 00:08:27.346 Initialization complete. Launching workers. 
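The overhead run that follows prints long submit and complete histograms; for a quick comparison, only the two summary lines starting with "submit (in ns) avg, min, max" and "complete (in ns) avg, min, max" are usually needed. A small sketch for pulling them out of a saved copy of this console output (build.log is a placeholder file name, not something this job writes):

    # Sketch only: extract the latency summary lines from a saved log.
    grep -E '(submit|complete) \(in ns\) avg, min, max =' build.log
    # keep just the three numbers after the '=' on each line
    grep -E '(submit|complete) \(in ns\) avg, min, max =' build.log | awk -F'= ' '{ print $2 }'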
00:08:27.346 submit (in ns) avg, min, max = 12194.6, 10430.0, 227468.5 00:08:27.346 complete (in ns) avg, min, max = 7675.5, 7170.0, 132182.3 00:08:27.346 00:08:27.346 Submit histogram 00:08:27.346 ================ 00:08:27.346 Range in us Cumulative Count 00:08:27.346 10.388 - 10.437: 0.0064% ( 1) 00:08:27.346 10.437 - 10.486: 0.0128% ( 1) 00:08:27.346 10.732 - 10.782: 0.0192% ( 1) 00:08:27.346 11.274 - 11.323: 0.0256% ( 1) 00:08:27.346 11.323 - 11.372: 0.0959% ( 11) 00:08:27.346 11.372 - 11.422: 0.9074% ( 127) 00:08:27.346 11.422 - 11.471: 3.2270% ( 363) 00:08:27.346 11.471 - 11.520: 8.5309% ( 830) 00:08:27.346 11.520 - 11.569: 16.3333% ( 1221) 00:08:27.346 11.569 - 11.618: 25.2732% ( 1399) 00:08:27.346 11.618 - 11.668: 34.2897% ( 1411) 00:08:27.346 11.668 - 11.717: 43.3766% ( 1422) 00:08:27.346 11.717 - 11.766: 50.6486% ( 1138) 00:08:27.346 11.766 - 11.815: 57.2113% ( 1027) 00:08:27.346 11.815 - 11.865: 62.1957% ( 780) 00:08:27.346 11.865 - 11.914: 66.2023% ( 627) 00:08:27.346 11.914 - 11.963: 69.6402% ( 538) 00:08:27.346 11.963 - 12.012: 72.8801% ( 507) 00:08:27.346 12.012 - 12.062: 75.8771% ( 469) 00:08:27.346 12.062 - 12.111: 78.5290% ( 415) 00:08:27.346 12.111 - 12.160: 81.0978% ( 402) 00:08:27.346 12.160 - 12.209: 83.4558% ( 369) 00:08:27.346 12.209 - 12.258: 85.4879% ( 318) 00:08:27.346 12.258 - 12.308: 87.4625% ( 309) 00:08:27.346 12.308 - 12.357: 88.8683% ( 220) 00:08:27.346 12.357 - 12.406: 90.2614% ( 218) 00:08:27.346 12.406 - 12.455: 91.5394% ( 200) 00:08:27.346 12.455 - 12.505: 92.4660% ( 145) 00:08:27.346 12.505 - 12.554: 93.1689% ( 110) 00:08:27.346 12.554 - 12.603: 93.6929% ( 82) 00:08:27.346 12.603 - 12.702: 94.3511% ( 103) 00:08:27.346 12.702 - 12.800: 94.9134% ( 88) 00:08:27.346 12.800 - 12.898: 95.2393% ( 51) 00:08:27.346 12.898 - 12.997: 95.4438% ( 32) 00:08:27.346 12.997 - 13.095: 95.5333% ( 14) 00:08:27.346 13.095 - 13.194: 95.5844% ( 8) 00:08:27.346 13.194 - 13.292: 95.6419% ( 9) 00:08:27.346 13.292 - 13.391: 95.6547% ( 2) 00:08:27.346 13.391 - 13.489: 95.6675% ( 2) 00:08:27.346 13.489 - 13.588: 95.7505% ( 13) 00:08:27.346 13.588 - 13.686: 95.8656% ( 18) 00:08:27.346 13.686 - 13.785: 95.9358% ( 11) 00:08:27.346 13.785 - 13.883: 96.0253% ( 14) 00:08:27.346 13.883 - 13.982: 96.0764% ( 8) 00:08:27.346 13.982 - 14.080: 96.1659% ( 14) 00:08:27.346 14.080 - 14.178: 96.2170% ( 8) 00:08:27.346 14.178 - 14.277: 96.3512% ( 21) 00:08:27.346 14.277 - 14.375: 96.4662% ( 18) 00:08:27.346 14.375 - 14.474: 96.5813% ( 18) 00:08:27.346 14.474 - 14.572: 96.6707% ( 14) 00:08:27.346 14.572 - 14.671: 96.7538% ( 13) 00:08:27.346 14.671 - 14.769: 96.8752% ( 19) 00:08:27.346 14.769 - 14.868: 96.9838% ( 17) 00:08:27.346 14.868 - 14.966: 97.0733% ( 14) 00:08:27.346 14.966 - 15.065: 97.1116% ( 6) 00:08:27.346 15.065 - 15.163: 97.1628% ( 8) 00:08:27.346 15.163 - 15.262: 97.1883% ( 4) 00:08:27.346 15.262 - 15.360: 97.2139% ( 4) 00:08:27.346 15.360 - 15.458: 97.2522% ( 6) 00:08:27.346 15.458 - 15.557: 97.2970% ( 7) 00:08:27.346 15.557 - 15.655: 97.3417% ( 7) 00:08:27.346 15.655 - 15.754: 97.3672% ( 4) 00:08:27.346 15.754 - 15.852: 97.3992% ( 5) 00:08:27.346 15.852 - 15.951: 97.4184% ( 3) 00:08:27.346 15.951 - 16.049: 97.4695% ( 8) 00:08:27.346 16.049 - 16.148: 97.4950% ( 4) 00:08:27.346 16.148 - 16.246: 97.5270% ( 5) 00:08:27.346 16.246 - 16.345: 97.5462% ( 3) 00:08:27.346 16.345 - 16.443: 97.5781% ( 5) 00:08:27.346 16.443 - 16.542: 97.5909% ( 2) 00:08:27.346 16.542 - 16.640: 97.5973% ( 1) 00:08:27.346 16.640 - 16.738: 97.6165% ( 3) 00:08:27.347 16.738 - 16.837: 97.6420% ( 4) 00:08:27.347 
16.837 - 16.935: 97.6548% ( 2) 00:08:27.347 16.935 - 17.034: 97.6676% ( 2) 00:08:27.347 17.034 - 17.132: 97.6740% ( 1) 00:08:27.347 17.132 - 17.231: 97.6868% ( 2) 00:08:27.347 17.231 - 17.329: 97.6931% ( 1) 00:08:27.347 17.329 - 17.428: 97.7187% ( 4) 00:08:27.347 17.428 - 17.526: 97.7570% ( 6) 00:08:27.347 17.526 - 17.625: 97.7890% ( 5) 00:08:27.347 17.625 - 17.723: 97.8401% ( 8) 00:08:27.347 17.723 - 17.822: 97.9040% ( 10) 00:08:27.347 17.822 - 17.920: 97.9807% ( 12) 00:08:27.347 17.920 - 18.018: 98.0446% ( 10) 00:08:27.347 18.018 - 18.117: 98.0957% ( 8) 00:08:27.347 18.117 - 18.215: 98.1532% ( 9) 00:08:27.347 18.215 - 18.314: 98.2491% ( 15) 00:08:27.347 18.314 - 18.412: 98.3258% ( 12) 00:08:27.347 18.412 - 18.511: 98.4216% ( 15) 00:08:27.347 18.511 - 18.609: 98.4472% ( 4) 00:08:27.347 18.609 - 18.708: 98.4791% ( 5) 00:08:27.347 18.708 - 18.806: 98.5175% ( 6) 00:08:27.347 18.806 - 18.905: 98.5686% ( 8) 00:08:27.347 18.905 - 19.003: 98.6197% ( 8) 00:08:27.347 19.003 - 19.102: 98.6645% ( 7) 00:08:27.347 19.102 - 19.200: 98.7028% ( 6) 00:08:27.347 19.200 - 19.298: 98.7284% ( 4) 00:08:27.347 19.298 - 19.397: 98.7603% ( 5) 00:08:27.347 19.397 - 19.495: 98.7667% ( 1) 00:08:27.347 19.495 - 19.594: 98.7986% ( 5) 00:08:27.347 19.594 - 19.692: 98.8114% ( 2) 00:08:27.347 19.692 - 19.791: 98.8178% ( 1) 00:08:27.347 19.889 - 19.988: 98.8306% ( 2) 00:08:27.347 19.988 - 20.086: 98.8498% ( 3) 00:08:27.347 20.086 - 20.185: 98.8625% ( 2) 00:08:27.347 20.283 - 20.382: 98.8753% ( 2) 00:08:27.347 20.382 - 20.480: 98.9137% ( 6) 00:08:27.347 20.480 - 20.578: 98.9392% ( 4) 00:08:27.347 20.578 - 20.677: 98.9456% ( 1) 00:08:27.347 20.874 - 20.972: 98.9584% ( 2) 00:08:27.347 20.972 - 21.071: 98.9648% ( 1) 00:08:27.347 21.366 - 21.465: 98.9712% ( 1) 00:08:27.347 21.465 - 21.563: 98.9776% ( 1) 00:08:27.347 21.563 - 21.662: 98.9840% ( 1) 00:08:27.347 21.662 - 21.760: 98.9967% ( 2) 00:08:27.347 21.760 - 21.858: 99.0095% ( 2) 00:08:27.347 21.858 - 21.957: 99.0223% ( 2) 00:08:27.347 22.154 - 22.252: 99.0287% ( 1) 00:08:27.347 22.449 - 22.548: 99.0351% ( 1) 00:08:27.347 22.548 - 22.646: 99.0415% ( 1) 00:08:27.347 22.646 - 22.745: 99.0543% ( 2) 00:08:27.347 23.040 - 23.138: 99.0606% ( 1) 00:08:27.347 23.138 - 23.237: 99.0670% ( 1) 00:08:27.347 23.729 - 23.828: 99.0734% ( 1) 00:08:27.347 24.123 - 24.222: 99.0798% ( 1) 00:08:27.347 24.222 - 24.320: 99.0862% ( 1) 00:08:27.347 24.517 - 24.615: 99.0990% ( 2) 00:08:27.347 26.585 - 26.782: 99.1118% ( 2) 00:08:27.347 27.175 - 27.372: 99.1182% ( 1) 00:08:27.347 28.554 - 28.751: 99.1309% ( 2) 00:08:27.347 29.145 - 29.342: 99.1373% ( 1) 00:08:27.347 31.311 - 31.508: 99.1501% ( 2) 00:08:27.347 31.508 - 31.705: 99.3099% ( 25) 00:08:27.347 31.705 - 31.902: 99.5655% ( 40) 00:08:27.347 31.902 - 32.098: 99.6869% ( 19) 00:08:27.347 32.098 - 32.295: 99.8019% ( 18) 00:08:27.347 32.295 - 32.492: 99.8466% ( 7) 00:08:27.347 32.492 - 32.689: 99.8722% ( 4) 00:08:27.347 32.689 - 32.886: 99.8978% ( 4) 00:08:27.347 32.886 - 33.083: 99.9041% ( 1) 00:08:27.347 33.083 - 33.280: 99.9169% ( 2) 00:08:27.347 33.280 - 33.477: 99.9233% ( 1) 00:08:27.347 33.477 - 33.674: 99.9361% ( 2) 00:08:27.347 35.249 - 35.446: 99.9425% ( 1) 00:08:27.347 38.400 - 38.597: 99.9489% ( 1) 00:08:27.347 44.505 - 44.702: 99.9553% ( 1) 00:08:27.347 47.852 - 48.049: 99.9617% ( 1) 00:08:27.347 48.049 - 48.246: 99.9680% ( 1) 00:08:27.347 49.231 - 49.428: 99.9744% ( 1) 00:08:27.347 50.806 - 51.200: 99.9808% ( 1) 00:08:27.347 61.834 - 62.228: 99.9872% ( 1) 00:08:27.347 70.105 - 70.498: 99.9936% ( 1) 00:08:27.347 226.855 - 228.431: 
100.0000% ( 1) 00:08:27.347 00:08:27.347 Complete histogram 00:08:27.347 ================== 00:08:27.347 Range in us Cumulative Count 00:08:27.347 7.138 - 7.188: 0.0128% ( 2) 00:08:27.347 7.188 - 7.237: 0.1789% ( 26) 00:08:27.347 7.237 - 7.286: 4.3070% ( 646) 00:08:27.347 7.286 - 7.335: 20.1035% ( 2472) 00:08:27.347 7.335 - 7.385: 46.0413% ( 4059) 00:08:27.347 7.385 - 7.434: 70.5285% ( 3832) 00:08:27.347 7.434 - 7.483: 84.3952% ( 2170) 00:08:27.347 7.483 - 7.532: 90.5809% ( 968) 00:08:27.347 7.532 - 7.582: 93.5331% ( 462) 00:08:27.347 7.582 - 7.631: 94.9390% ( 220) 00:08:27.347 7.631 - 7.680: 95.6419% ( 110) 00:08:27.347 7.680 - 7.729: 95.9870% ( 54) 00:08:27.347 7.729 - 7.778: 96.1978% ( 33) 00:08:27.347 7.778 - 7.828: 96.2937% ( 15) 00:08:27.347 7.828 - 7.877: 96.3384% ( 7) 00:08:27.347 7.877 - 7.926: 96.3576% ( 3) 00:08:27.347 7.926 - 7.975: 96.3959% ( 6) 00:08:27.347 7.975 - 8.025: 96.4407% ( 7) 00:08:27.347 8.025 - 8.074: 96.4662% ( 4) 00:08:27.347 8.074 - 8.123: 96.5046% ( 6) 00:08:27.347 8.123 - 8.172: 96.5301% ( 4) 00:08:27.347 8.172 - 8.222: 96.5685% ( 6) 00:08:27.347 8.222 - 8.271: 96.6388% ( 11) 00:08:27.347 8.271 - 8.320: 96.6707% ( 5) 00:08:27.347 8.320 - 8.369: 96.7282% ( 9) 00:08:27.347 8.369 - 8.418: 96.8560% ( 20) 00:08:27.347 8.418 - 8.468: 96.9966% ( 22) 00:08:27.347 8.468 - 8.517: 97.1244% ( 20) 00:08:27.347 8.517 - 8.566: 97.2011% ( 12) 00:08:27.347 8.566 - 8.615: 97.2394% ( 6) 00:08:27.347 8.615 - 8.665: 97.2842% ( 7) 00:08:27.347 8.665 - 8.714: 97.3161% ( 5) 00:08:27.347 8.714 - 8.763: 97.3417% ( 4) 00:08:27.347 8.862 - 8.911: 97.3545% ( 2) 00:08:27.347 8.911 - 8.960: 97.3609% ( 1) 00:08:27.347 9.157 - 9.206: 97.3672% ( 1) 00:08:27.347 9.206 - 9.255: 97.3736% ( 1) 00:08:27.347 9.255 - 9.305: 97.3800% ( 1) 00:08:27.347 9.452 - 9.502: 97.3864% ( 1) 00:08:27.347 9.649 - 9.698: 97.3928% ( 1) 00:08:27.347 9.748 - 9.797: 97.4056% ( 2) 00:08:27.347 9.797 - 9.846: 97.4120% ( 1) 00:08:27.347 9.895 - 9.945: 97.4248% ( 2) 00:08:27.347 9.945 - 9.994: 97.4311% ( 1) 00:08:27.347 10.092 - 10.142: 97.4375% ( 1) 00:08:27.347 10.191 - 10.240: 97.4439% ( 1) 00:08:27.347 10.240 - 10.289: 97.4503% ( 1) 00:08:27.347 10.289 - 10.338: 97.4631% ( 2) 00:08:27.347 10.338 - 10.388: 97.4759% ( 2) 00:08:27.347 10.437 - 10.486: 97.4950% ( 3) 00:08:27.347 10.486 - 10.535: 97.5014% ( 1) 00:08:27.347 10.535 - 10.585: 97.5078% ( 1) 00:08:27.347 10.585 - 10.634: 97.5206% ( 2) 00:08:27.347 10.831 - 10.880: 97.5270% ( 1) 00:08:27.347 10.880 - 10.929: 97.5334% ( 1) 00:08:27.347 10.978 - 11.028: 97.5398% ( 1) 00:08:27.347 11.028 - 11.077: 97.5462% ( 1) 00:08:27.347 11.323 - 11.372: 97.5526% ( 1) 00:08:27.347 11.766 - 11.815: 97.5589% ( 1) 00:08:27.347 11.865 - 11.914: 97.5653% ( 1) 00:08:27.347 12.012 - 12.062: 97.5717% ( 1) 00:08:27.347 12.111 - 12.160: 97.5781% ( 1) 00:08:27.347 12.209 - 12.258: 97.5845% ( 1) 00:08:27.347 12.258 - 12.308: 97.5973% ( 2) 00:08:27.347 12.357 - 12.406: 97.6101% ( 2) 00:08:27.347 12.406 - 12.455: 97.6165% ( 1) 00:08:27.347 12.554 - 12.603: 97.6229% ( 1) 00:08:27.347 12.702 - 12.800: 97.6420% ( 3) 00:08:27.347 12.800 - 12.898: 97.6612% ( 3) 00:08:27.347 12.898 - 12.997: 97.6676% ( 1) 00:08:27.347 13.095 - 13.194: 97.6804% ( 2) 00:08:27.347 13.194 - 13.292: 97.7123% ( 5) 00:08:27.347 13.292 - 13.391: 97.7826% ( 11) 00:08:27.347 13.391 - 13.489: 97.8273% ( 7) 00:08:27.347 13.489 - 13.588: 97.8976% ( 11) 00:08:27.347 13.588 - 13.686: 97.9360% ( 6) 00:08:27.347 13.686 - 13.785: 98.0127% ( 12) 00:08:27.347 13.785 - 13.883: 98.0893% ( 12) 00:08:27.347 13.883 - 13.982: 98.1724% ( 13) 
00:08:27.347 13.982 - 14.080: 98.2427% ( 11) 00:08:27.347 14.080 - 14.178: 98.3258% ( 13) 00:08:27.347 14.178 - 14.277: 98.3897% ( 10) 00:08:27.347 14.277 - 14.375: 98.4472% ( 9) 00:08:27.347 14.375 - 14.474: 98.5430% ( 15) 00:08:27.347 14.474 - 14.572: 98.6133% ( 11) 00:08:27.347 14.572 - 14.671: 98.6836% ( 11) 00:08:27.347 14.671 - 14.769: 98.7347% ( 8) 00:08:27.347 14.769 - 14.868: 98.7731% ( 6) 00:08:27.347 14.868 - 14.966: 98.7923% ( 3) 00:08:27.347 14.966 - 15.065: 98.7986% ( 1) 00:08:27.347 15.065 - 15.163: 98.8050% ( 1) 00:08:27.347 15.163 - 15.262: 98.8434% ( 6) 00:08:27.347 15.262 - 15.360: 98.8689% ( 4) 00:08:27.347 15.360 - 15.458: 98.8753% ( 1) 00:08:27.347 15.458 - 15.557: 98.8881% ( 2) 00:08:27.347 15.557 - 15.655: 98.9073% ( 3) 00:08:27.347 15.655 - 15.754: 98.9201% ( 2) 00:08:27.347 15.754 - 15.852: 98.9264% ( 1) 00:08:27.347 15.852 - 15.951: 98.9456% ( 3) 00:08:27.347 15.951 - 16.049: 98.9584% ( 2) 00:08:27.347 16.049 - 16.148: 98.9648% ( 1) 00:08:27.347 16.246 - 16.345: 98.9712% ( 1) 00:08:27.347 16.542 - 16.640: 98.9776% ( 1) 00:08:27.348 16.640 - 16.738: 98.9840% ( 1) 00:08:27.348 16.935 - 17.034: 98.9904% ( 1) 00:08:27.348 17.132 - 17.231: 98.9967% ( 1) 00:08:27.348 17.231 - 17.329: 99.0031% ( 1) 00:08:27.348 17.428 - 17.526: 99.0159% ( 2) 00:08:27.348 17.723 - 17.822: 99.0223% ( 1) 00:08:27.348 17.920 - 18.018: 99.0287% ( 1) 00:08:27.348 18.314 - 18.412: 99.0415% ( 2) 00:08:27.348 18.511 - 18.609: 99.0479% ( 1) 00:08:27.348 19.298 - 19.397: 99.0606% ( 2) 00:08:27.348 19.594 - 19.692: 99.0734% ( 2) 00:08:27.348 20.185 - 20.283: 99.0798% ( 1) 00:08:27.348 20.874 - 20.972: 99.0862% ( 1) 00:08:27.348 20.972 - 21.071: 99.0926% ( 1) 00:08:27.348 21.662 - 21.760: 99.0990% ( 1) 00:08:27.348 21.760 - 21.858: 99.1054% ( 1) 00:08:27.348 21.858 - 21.957: 99.2076% ( 16) 00:08:27.348 21.957 - 22.055: 99.4952% ( 45) 00:08:27.348 22.055 - 22.154: 99.7124% ( 34) 00:08:27.348 22.154 - 22.252: 99.7700% ( 9) 00:08:27.348 22.252 - 22.351: 99.8211% ( 8) 00:08:27.348 22.351 - 22.449: 99.8339% ( 2) 00:08:27.348 22.449 - 22.548: 99.8530% ( 3) 00:08:27.348 22.548 - 22.646: 99.8594% ( 1) 00:08:27.348 22.646 - 22.745: 99.8658% ( 1) 00:08:27.348 22.942 - 23.040: 99.8722% ( 1) 00:08:27.348 23.335 - 23.434: 99.8786% ( 1) 00:08:27.348 24.222 - 24.320: 99.8850% ( 1) 00:08:27.348 24.320 - 24.418: 99.8914% ( 1) 00:08:27.348 24.714 - 24.812: 99.8978% ( 1) 00:08:27.348 25.994 - 26.191: 99.9041% ( 1) 00:08:27.348 27.569 - 27.766: 99.9105% ( 1) 00:08:27.348 28.160 - 28.357: 99.9169% ( 1) 00:08:27.348 28.554 - 28.751: 99.9233% ( 1) 00:08:27.348 30.523 - 30.720: 99.9297% ( 1) 00:08:27.348 30.720 - 30.917: 99.9361% ( 1) 00:08:27.348 34.658 - 34.855: 99.9425% ( 1) 00:08:27.348 37.809 - 38.006: 99.9553% ( 2) 00:08:27.348 38.597 - 38.794: 99.9617% ( 1) 00:08:27.348 38.991 - 39.188: 99.9744% ( 2) 00:08:27.348 39.188 - 39.385: 99.9808% ( 1) 00:08:27.348 43.520 - 43.717: 99.9872% ( 1) 00:08:27.348 48.443 - 48.640: 99.9936% ( 1) 00:08:27.348 131.545 - 132.332: 100.0000% ( 1) 00:08:27.348 00:08:27.348 00:08:27.348 real 0m1.181s 00:08:27.348 user 0m1.045s 00:08:27.348 sys 0m0.094s 00:08:27.348 14:16:08 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.348 14:16:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:27.348 ************************************ 00:08:27.348 END TEST nvme_overhead 00:08:27.348 ************************************ 00:08:27.348 14:16:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:27.348 14:16:08 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:27.348 14:16:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.348 14:16:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.348 ************************************ 00:08:27.348 START TEST nvme_arbitration 00:08:27.348 ************************************ 00:08:27.348 14:16:08 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:30.628 Initializing NVMe Controllers 00:08:30.628 Attached to 0000:00:13.0 00:08:30.628 Attached to 0000:00:10.0 00:08:30.628 Attached to 0000:00:11.0 00:08:30.628 Attached to 0000:00:12.0 00:08:30.628 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:30.628 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:30.628 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:30.628 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:30.629 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:30.629 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:30.629 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:30.629 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:30.629 Initialization complete. Launching workers. 00:08:30.629 Starting thread on core 1 with urgent priority queue 00:08:30.629 Starting thread on core 2 with urgent priority queue 00:08:30.629 Starting thread on core 3 with urgent priority queue 00:08:30.629 Starting thread on core 0 with urgent priority queue 00:08:30.629 QEMU NVMe Ctrl (12343 ) core 0: 6592.00 IO/s 15.17 secs/100000 ios 00:08:30.629 QEMU NVMe Ctrl (12342 ) core 0: 6592.00 IO/s 15.17 secs/100000 ios 00:08:30.629 QEMU NVMe Ctrl (12340 ) core 1: 6499.00 IO/s 15.39 secs/100000 ios 00:08:30.629 QEMU NVMe Ctrl (12342 ) core 1: 6506.67 IO/s 15.37 secs/100000 ios 00:08:30.629 QEMU NVMe Ctrl (12341 ) core 2: 5931.33 IO/s 16.86 secs/100000 ios 00:08:30.629 QEMU NVMe Ctrl (12342 ) core 3: 6001.33 IO/s 16.66 secs/100000 ios 00:08:30.629 ======================================================== 00:08:30.629 00:08:30.629 00:08:30.629 real 0m3.223s 00:08:30.629 user 0m9.021s 00:08:30.629 sys 0m0.118s 00:08:30.629 14:16:12 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.629 14:16:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:30.629 ************************************ 00:08:30.629 END TEST nvme_arbitration 00:08:30.629 ************************************ 00:08:30.629 14:16:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.629 ************************************ 00:08:30.629 START TEST nvme_single_aen 00:08:30.629 ************************************ 00:08:30.629 14:16:12 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:30.629 Asynchronous Event Request test 00:08:30.629 Attached to 0000:00:13.0 00:08:30.629 Attached to 0000:00:10.0 00:08:30.629 Attached to 0000:00:11.0 00:08:30.629 Attached to 0000:00:12.0 00:08:30.629 
Reset controller to setup AER completions for this process 00:08:30.629 Registering asynchronous event callbacks... 00:08:30.629 Getting orig temperature thresholds of all controllers 00:08:30.629 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.629 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.629 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.629 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.629 Setting all controllers temperature threshold low to trigger AER 00:08:30.629 Waiting for all controllers temperature threshold to be set lower 00:08:30.629 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.629 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:30.629 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.629 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:30.629 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.629 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:30.629 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.629 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:30.629 Waiting for all controllers to trigger AER and reset threshold 00:08:30.629 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.629 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.629 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.629 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.629 Cleaning up... 00:08:30.629 00:08:30.629 real 0m0.199s 00:08:30.629 user 0m0.060s 00:08:30.629 sys 0m0.095s 00:08:30.629 14:16:12 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.629 14:16:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:30.629 ************************************ 00:08:30.629 END TEST nvme_single_aen 00:08:30.629 ************************************ 00:08:30.629 14:16:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.629 14:16:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.629 ************************************ 00:08:30.629 START TEST nvme_doorbell_aers 00:08:30.629 ************************************ 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.629 14:16:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:30.886 [2024-11-29 14:16:12.526728] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:08:40.901 Executing: test_write_invalid_db 00:08:40.901 Waiting for AER completion... 00:08:40.901 Failure: test_write_invalid_db 00:08:40.901 00:08:40.901 Executing: test_invalid_db_write_overflow_sq 00:08:40.901 Waiting for AER completion... 00:08:40.901 Failure: test_invalid_db_write_overflow_sq 00:08:40.901 00:08:40.901 Executing: test_invalid_db_write_overflow_cq 00:08:40.901 Waiting for AER completion... 00:08:40.901 Failure: test_invalid_db_write_overflow_cq 00:08:40.901 00:08:40.901 14:16:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:40.901 14:16:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:40.901 [2024-11-29 14:16:22.560239] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:08:50.871 Executing: test_write_invalid_db 00:08:50.871 Waiting for AER completion... 00:08:50.871 Failure: test_write_invalid_db 00:08:50.871 00:08:50.871 Executing: test_invalid_db_write_overflow_sq 00:08:50.871 Waiting for AER completion... 00:08:50.871 Failure: test_invalid_db_write_overflow_sq 00:08:50.871 00:08:50.871 Executing: test_invalid_db_write_overflow_cq 00:08:50.871 Waiting for AER completion... 00:08:50.871 Failure: test_invalid_db_write_overflow_cq 00:08:50.871 00:08:50.871 14:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.871 14:16:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.871 [2024-11-29 14:16:32.591439] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:00.909 Executing: test_write_invalid_db 00:09:00.909 Waiting for AER completion... 00:09:00.909 Failure: test_write_invalid_db 00:09:00.909 00:09:00.909 Executing: test_invalid_db_write_overflow_sq 00:09:00.909 Waiting for AER completion... 00:09:00.909 Failure: test_invalid_db_write_overflow_sq 00:09:00.909 00:09:00.909 Executing: test_invalid_db_write_overflow_cq 00:09:00.909 Waiting for AER completion... 
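The traced commands above show what nvme_doorbell_aers does: collect the NVMe PCI addresses from scripts/gen_nvme.sh via jq, then run the doorbell_aers binary against each address under a 10-second timeout. Rewritten as plain bash for readability (a paraphrase of the traced lines, not the literal function body in nvme.sh):

    # Sketch reconstructed from the xtrace output above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done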
00:09:00.909 Failure: test_invalid_db_write_overflow_cq 00:09:00.909 00:09:00.909 14:16:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.909 14:16:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.909 [2024-11-29 14:16:42.624088] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 Executing: test_write_invalid_db 00:09:10.896 Waiting for AER completion... 00:09:10.896 Failure: test_write_invalid_db 00:09:10.896 00:09:10.896 Executing: test_invalid_db_write_overflow_sq 00:09:10.896 Waiting for AER completion... 00:09:10.896 Failure: test_invalid_db_write_overflow_sq 00:09:10.896 00:09:10.896 Executing: test_invalid_db_write_overflow_cq 00:09:10.896 Waiting for AER completion... 00:09:10.896 Failure: test_invalid_db_write_overflow_cq 00:09:10.896 00:09:10.896 00:09:10.896 real 0m40.182s 00:09:10.896 user 0m34.148s 00:09:10.896 sys 0m5.654s 00:09:10.896 14:16:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.896 14:16:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:10.896 ************************************ 00:09:10.896 END TEST nvme_doorbell_aers 00:09:10.896 ************************************ 00:09:10.896 14:16:52 nvme -- nvme/nvme.sh@97 -- # uname 00:09:10.896 14:16:52 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:10.896 14:16:52 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.896 14:16:52 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:10.896 14:16:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.896 14:16:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.896 ************************************ 00:09:10.896 START TEST nvme_multi_aen 00:09:10.896 ************************************ 00:09:10.896 14:16:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.896 [2024-11-29 14:16:52.654219] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.654289] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.654298] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.655475] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.655512] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.655520] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.656571] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. 
Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.656598] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.656606] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.657594] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.657622] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 [2024-11-29 14:16:52.657629] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75448) is not found. Dropping the request. 00:09:10.896 Child process pid: 75968 00:09:11.155 [Child] Asynchronous Event Request test 00:09:11.155 [Child] Attached to 0000:00:13.0 00:09:11.155 [Child] Attached to 0000:00:10.0 00:09:11.155 [Child] Attached to 0000:00:11.0 00:09:11.155 [Child] Attached to 0000:00:12.0 00:09:11.155 [Child] Registering asynchronous event callbacks... 00:09:11.155 [Child] Getting orig temperature thresholds of all controllers 00:09:11.155 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:11.155 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 [Child] Cleaning up... 00:09:11.155 Asynchronous Event Request test 00:09:11.155 Attached to 0000:00:13.0 00:09:11.155 Attached to 0000:00:10.0 00:09:11.155 Attached to 0000:00:11.0 00:09:11.155 Attached to 0000:00:12.0 00:09:11.155 Reset controller to setup AER completions for this process 00:09:11.155 Registering asynchronous event callbacks... 
00:09:11.155 Getting orig temperature thresholds of all controllers 00:09:11.155 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.155 Setting all controllers temperature threshold low to trigger AER 00:09:11.155 Waiting for all controllers temperature threshold to be set lower 00:09:11.155 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:11.155 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:11.155 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:11.155 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.155 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:11.155 Waiting for all controllers to trigger AER and reset threshold 00:09:11.155 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.155 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.156 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.156 Cleaning up... 00:09:11.156 00:09:11.156 real 0m0.366s 00:09:11.156 user 0m0.101s 00:09:11.156 sys 0m0.171s 00:09:11.156 14:16:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.156 ************************************ 00:09:11.156 END TEST nvme_multi_aen 00:09:11.156 ************************************ 00:09:11.156 14:16:52 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 14:16:52 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:11.156 14:16:52 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:11.156 14:16:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.156 14:16:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 ************************************ 00:09:11.156 START TEST nvme_startup 00:09:11.156 ************************************ 00:09:11.156 14:16:52 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:11.415 Initializing NVMe Controllers 00:09:11.415 Attached to 0000:00:13.0 00:09:11.415 Attached to 0000:00:10.0 00:09:11.415 Attached to 0000:00:11.0 00:09:11.415 Attached to 0000:00:12.0 00:09:11.415 Initialization complete. 00:09:11.415 Time used:115914.711 (us). 
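The nvme_startup stage above times controller initialization; its one interesting output is the "Time used" line just printed. A sketch for re-running it and keeping only that line (the path and the -t 1000000 argument are copied from the banner above; what -t controls is not spelled out in this log):

    # Sketch only: re-run the startup timing test and keep the "Time used" figure.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/nvme/startup/startup" -t 1000000 | grep 'Time used'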
00:09:11.415 00:09:11.415 real 0m0.169s 00:09:11.415 user 0m0.043s 00:09:11.415 sys 0m0.087s 00:09:11.415 14:16:53 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.415 14:16:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:11.415 ************************************ 00:09:11.415 END TEST nvme_startup 00:09:11.415 ************************************ 00:09:11.415 14:16:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:11.415 14:16:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.415 14:16:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.415 14:16:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.415 ************************************ 00:09:11.415 START TEST nvme_multi_secondary 00:09:11.415 ************************************ 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76019 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76020 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:11.415 14:16:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:14.721 Initializing NVMe Controllers 00:09:14.721 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.721 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:14.721 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:14.721 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:14.721 Initialization complete. Launching workers. 
00:09:14.721 ======================================================== 00:09:14.721 Latency(us) 00:09:14.721 Device Information : IOPS MiB/s Average min max 00:09:14.721 PCIE (0000:00:13.0) NSID 1 from core 1: 7741.53 30.24 2066.35 764.98 8742.33 00:09:14.721 PCIE (0000:00:10.0) NSID 1 from core 1: 7741.53 30.24 2065.35 756.67 9340.53 00:09:14.721 PCIE (0000:00:11.0) NSID 1 from core 1: 7739.86 30.23 2066.73 763.58 9233.37 00:09:14.721 PCIE (0000:00:12.0) NSID 1 from core 1: 7740.53 30.24 2066.63 749.59 9535.91 00:09:14.721 PCIE (0000:00:12.0) NSID 2 from core 1: 7741.53 30.24 2066.35 767.37 9279.67 00:09:14.721 PCIE (0000:00:12.0) NSID 3 from core 1: 7746.86 30.26 2064.94 781.48 8673.42 00:09:14.721 ======================================================== 00:09:14.721 Total : 46451.83 181.45 2066.06 749.59 9535.91 00:09:14.721 00:09:14.721 Initializing NVMe Controllers 00:09:14.721 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.721 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.721 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:14.721 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:14.721 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:14.721 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:14.721 Initialization complete. Launching workers. 00:09:14.721 ======================================================== 00:09:14.721 Latency(us) 00:09:14.721 Device Information : IOPS MiB/s Average min max 00:09:14.721 PCIE (0000:00:13.0) NSID 1 from core 2: 3178.79 12.42 5032.49 938.40 13865.11 00:09:14.721 PCIE (0000:00:10.0) NSID 1 from core 2: 3178.79 12.42 5032.27 911.90 13536.52 00:09:14.721 PCIE (0000:00:11.0) NSID 1 from core 2: 3178.79 12.42 5033.09 932.22 12893.70 00:09:14.721 PCIE (0000:00:12.0) NSID 1 from core 2: 3178.79 12.42 5033.04 915.47 16101.68 00:09:14.721 PCIE (0000:00:12.0) NSID 2 from core 2: 3178.79 12.42 5033.00 1232.87 15729.30 00:09:14.721 PCIE (0000:00:12.0) NSID 3 from core 2: 3178.79 12.42 5033.13 941.16 14065.41 00:09:14.721 ======================================================== 00:09:14.721 Total : 19072.72 74.50 5032.84 911.90 16101.68 00:09:14.721 00:09:14.721 14:16:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76019 00:09:16.658 Initializing NVMe Controllers 00:09:16.658 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:16.658 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:16.658 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:16.658 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:16.658 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:16.658 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:16.658 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:16.658 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:16.658 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:16.658 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:16.658 Initialization complete. Launching workers. 
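The nvme_multi_secondary stage runs three spdk_nvme_perf instances side by side; the pid0/pid1 assignments and the "wait 76019" / "wait 76020" lines above and below show the shape of it. A stripped-down sketch of that pattern with the exact perf arguments printed in the banners (which instance is primary versus secondary is not labelled in the log; the backgrounding and waits are inferred from the pid assignments):

    # Sketch only: three concurrent perf runs on disjoint core masks, flags copied from the log.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core mask 0x1, 5 s
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core mask 0x2, 3 s
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core mask 0x4 runs in the foreground
    wait "$pid0"
    wait "$pid1"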
00:09:16.658 ======================================================== 00:09:16.658 Latency(us) 00:09:16.658 Device Information : IOPS MiB/s Average min max 00:09:16.658 PCIE (0000:00:13.0) NSID 1 from core 0: 10378.45 40.54 1541.30 693.67 10781.23 00:09:16.658 PCIE (0000:00:10.0) NSID 1 from core 0: 10375.65 40.53 1540.86 672.61 10486.13 00:09:16.658 PCIE (0000:00:11.0) NSID 1 from core 0: 10367.85 40.50 1542.84 673.84 9006.19 00:09:16.658 PCIE (0000:00:12.0) NSID 1 from core 0: 10364.25 40.49 1543.35 524.96 10013.57 00:09:16.658 PCIE (0000:00:12.0) NSID 2 from core 0: 10376.65 40.53 1541.49 457.96 10763.50 00:09:16.658 PCIE (0000:00:12.0) NSID 3 from core 0: 10377.05 40.54 1541.42 376.35 10346.45 00:09:16.658 ======================================================== 00:09:16.658 Total : 62239.88 243.12 1541.88 376.35 10781.23 00:09:16.658 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76020 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76089 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76090 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:16.658 14:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:19.959 Initializing NVMe Controllers 00:09:19.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.959 Initialization complete. Launching workers. 
00:09:19.959 ======================================================== 00:09:19.959 Latency(us) 00:09:19.959 Device Information : IOPS MiB/s Average min max 00:09:19.959 PCIE (0000:00:13.0) NSID 1 from core 0: 8170.61 31.92 1957.84 715.89 5568.54 00:09:19.959 PCIE (0000:00:10.0) NSID 1 from core 0: 8170.61 31.92 1957.00 714.14 6629.97 00:09:19.959 PCIE (0000:00:11.0) NSID 1 from core 0: 8170.61 31.92 1957.98 735.76 6686.31 00:09:19.959 PCIE (0000:00:12.0) NSID 1 from core 0: 8170.61 31.92 1958.03 733.30 6426.89 00:09:19.959 PCIE (0000:00:12.0) NSID 2 from core 0: 8170.61 31.92 1958.01 736.76 5618.98 00:09:19.959 PCIE (0000:00:12.0) NSID 3 from core 0: 8170.61 31.92 1958.05 718.96 5498.36 00:09:19.959 ======================================================== 00:09:19.959 Total : 49023.64 191.50 1957.82 714.14 6686.31 00:09:19.959 00:09:19.959 Initializing NVMe Controllers 00:09:19.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:19.960 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:19.960 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:19.960 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:19.960 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:19.960 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:19.960 Initialization complete. Launching workers. 00:09:19.960 ======================================================== 00:09:19.960 Latency(us) 00:09:19.960 Device Information : IOPS MiB/s Average min max 00:09:19.960 PCIE (0000:00:13.0) NSID 1 from core 1: 7967.33 31.12 2007.74 721.18 7828.39 00:09:19.960 PCIE (0000:00:10.0) NSID 1 from core 1: 7967.33 31.12 2006.89 702.09 6583.64 00:09:19.960 PCIE (0000:00:11.0) NSID 1 from core 1: 7967.33 31.12 2008.03 723.18 7508.07 00:09:19.960 PCIE (0000:00:12.0) NSID 1 from core 1: 7967.33 31.12 2008.03 742.47 7626.86 00:09:19.960 PCIE (0000:00:12.0) NSID 2 from core 1: 7967.33 31.12 2008.09 733.10 7370.47 00:09:19.960 PCIE (0000:00:12.0) NSID 3 from core 1: 7967.33 31.12 2008.07 709.83 7785.03 00:09:19.960 ======================================================== 00:09:19.960 Total : 47803.95 186.73 2007.81 702.09 7828.39 00:09:19.960 00:09:22.500 Initializing NVMe Controllers 00:09:22.500 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.500 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.500 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.500 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.500 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:22.500 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:22.500 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:22.500 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:22.500 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:22.500 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:22.500 Initialization complete. Launching workers. 
00:09:22.500 ======================================================== 00:09:22.500 Latency(us) 00:09:22.500 Device Information : IOPS MiB/s Average min max 00:09:22.500 PCIE (0000:00:13.0) NSID 1 from core 2: 4668.02 18.23 3426.85 760.20 12555.90 00:09:22.500 PCIE (0000:00:10.0) NSID 1 from core 2: 4668.02 18.23 3425.84 740.89 12191.67 00:09:22.500 PCIE (0000:00:11.0) NSID 1 from core 2: 4668.02 18.23 3426.75 740.91 11949.04 00:09:22.500 PCIE (0000:00:12.0) NSID 1 from core 2: 4668.02 18.23 3427.02 659.70 12422.89 00:09:22.500 PCIE (0000:00:12.0) NSID 2 from core 2: 4668.02 18.23 3426.76 559.34 12351.98 00:09:22.500 PCIE (0000:00:12.0) NSID 3 from core 2: 4668.02 18.23 3426.17 458.79 12350.84 00:09:22.500 ======================================================== 00:09:22.500 Total : 28008.15 109.41 3426.57 458.79 12555.90 00:09:22.500 00:09:22.500 ************************************ 00:09:22.500 END TEST nvme_multi_secondary 00:09:22.500 ************************************ 00:09:22.500 14:17:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76089 00:09:22.500 14:17:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76090 00:09:22.500 00:09:22.500 real 0m10.645s 00:09:22.500 user 0m18.280s 00:09:22.500 sys 0m0.598s 00:09:22.500 14:17:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.500 14:17:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:22.500 14:17:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:22.500 14:17:03 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:22.500 14:17:03 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75057 ]] 00:09:22.500 14:17:03 nvme -- common/autotest_common.sh@1090 -- # kill 75057 00:09:22.500 14:17:03 nvme -- common/autotest_common.sh@1091 -- # wait 75057 00:09:22.500 [2024-11-29 14:17:03.801372] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.801666] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.801698] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.801722] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.802413] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.802470] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.802508] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.802534] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.803249] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 
00:09:22.500 [2024-11-29 14:17:03.803311] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.803333] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.803360] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.500 [2024-11-29 14:17:03.804224] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.501 [2024-11-29 14:17:03.804429] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.501 [2024-11-29 14:17:03.804454] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.501 [2024-11-29 14:17:03.804475] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75967) is not found. Dropping the request. 00:09:22.501 14:17:03 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:22.501 14:17:03 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:22.501 14:17:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:22.501 14:17:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.501 14:17:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.501 14:17:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.501 ************************************ 00:09:22.501 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:22.501 ************************************ 00:09:22.501 14:17:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:22.501 * Looking for test storage... 
00:09:22.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.501 14:17:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:22.501 14:17:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:22.501 14:17:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:22.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.501 --rc genhtml_branch_coverage=1 00:09:22.501 --rc genhtml_function_coverage=1 00:09:22.501 --rc genhtml_legend=1 00:09:22.501 --rc geninfo_all_blocks=1 00:09:22.501 --rc geninfo_unexecuted_blocks=1 00:09:22.501 00:09:22.501 ' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:22.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.501 --rc genhtml_branch_coverage=1 00:09:22.501 --rc genhtml_function_coverage=1 00:09:22.501 --rc genhtml_legend=1 00:09:22.501 --rc geninfo_all_blocks=1 00:09:22.501 --rc geninfo_unexecuted_blocks=1 00:09:22.501 00:09:22.501 ' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:22.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.501 --rc genhtml_branch_coverage=1 00:09:22.501 --rc genhtml_function_coverage=1 00:09:22.501 --rc genhtml_legend=1 00:09:22.501 --rc geninfo_all_blocks=1 00:09:22.501 --rc geninfo_unexecuted_blocks=1 00:09:22.501 00:09:22.501 ' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:22.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.501 --rc genhtml_branch_coverage=1 00:09:22.501 --rc genhtml_function_coverage=1 00:09:22.501 --rc genhtml_legend=1 00:09:22.501 --rc geninfo_all_blocks=1 00:09:22.501 --rc geninfo_unexecuted_blocks=1 00:09:22.501 00:09:22.501 ' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:22.501 
14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:22.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76257 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76257 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76257 ']' 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
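For reference, the device selection just traced boils down to asking gen_nvme.sh for the PCI addresses of all NVMe controllers, taking the first one, and then starting spdk_tgt on four cores; a condensed sketch (paths and commands as shown in the trace above, variable names illustrative):

    rootdir=/home/vagrant/spdk_repo/spdk
    # enumerate controller PCI addresses (the trace shows four: 00:10.0, 00:11.0, 00:12.0, 00:13.0)
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                                   # -> 0000:00:10.0
    "$rootdir/build/bin/spdk_tgt" -m 0xF &           # target app on cores 0-3
    spdk_target_pid=$!
    # once the target listens on /var/tmp/spdk.sock, the device is attached as controller "nvme0"
    # via rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf" (seen just below in the trace)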
00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.501 14:17:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.501 [2024-11-29 14:17:04.223789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:22.501 [2024-11-29 14:17:04.224199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76257 ] 00:09:22.762 [2024-11-29 14:17:04.391403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.762 [2024-11-29 14:17:04.430300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.762 [2024-11-29 14:17:04.430719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.762 [2024-11-29 14:17:04.431062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.762 [2024-11-29 14:17:04.431149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.334 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.334 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 nvme0n1 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rdGIi.txt 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 true 00:09:23.335 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732889825 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76280 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:23.594 14:17:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.508 [2024-11-29 14:17:07.136112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:25.508 [2024-11-29 14:17:07.136424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:25.508 [2024-11-29 14:17:07.136449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:25.508 [2024-11-29 14:17:07.136481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.508 [2024-11-29 14:17:07.138518] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:25.508 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76280 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76280 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76280 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.508 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rdGIi.txt 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rdGIi.txt 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76257 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76257 ']' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76257 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76257 00:09:25.509 killing process with pid 76257 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76257' 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76257 00:09:25.509 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76257 00:09:26.080 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:26.081 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:26.081 ************************************ 00:09:26.081 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:26.081 ************************************ 00:09:26.081 00:09:26.081 real 0m3.690s 00:09:26.081 user 
0m12.861s 00:09:26.081 sys 0m0.560s 00:09:26.081 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.081 14:17:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.081 14:17:07 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:26.081 14:17:07 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:26.081 14:17:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.081 14:17:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.081 14:17:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.081 ************************************ 00:09:26.081 START TEST nvme_fio 00:09:26.081 ************************************ 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:26.081 14:17:07 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.081 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:26.342 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:26.342 14:17:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.602 14:17:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.602 14:17:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1340 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.602 14:17:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.602 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.602 fio-3.35 00:09:26.602 Starting 1 thread 00:09:33.182 00:09:33.182 test: (groupid=0, jobs=1): err= 0: pid=76403: Fri Nov 29 14:17:14 2024 00:09:33.182 read: IOPS=22.5k, BW=87.8MiB/s (92.0MB/s)(176MiB/2001msec) 00:09:33.182 slat (nsec): min=3332, max=82623, avg=5016.05, stdev=2480.77 00:09:33.182 clat (usec): min=191, max=10503, avg=2837.70, stdev=920.28 00:09:33.182 lat (usec): min=196, max=10559, avg=2842.71, stdev=921.62 00:09:33.182 clat percentiles (usec): 00:09:33.182 | 1.00th=[ 1401], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:33.182 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:33.182 | 70.00th=[ 2769], 80.00th=[ 3032], 90.00th=[ 4015], 95.00th=[ 4948], 00:09:33.182 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 8455], 00:09:33.182 | 99.99th=[ 9372] 00:09:33.182 bw ( KiB/s): min=83952, max=91744, per=98.20%, avg=88274.67, stdev=3965.47, samples=3 00:09:33.182 iops : min=20988, max=22936, avg=22068.67, stdev=991.37, samples=3 00:09:33.182 write: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:09:33.182 slat (nsec): min=3412, max=88924, avg=5226.32, stdev=2436.37 00:09:33.182 clat (usec): min=238, max=9383, avg=2855.64, stdev=935.24 00:09:33.182 lat (usec): min=242, max=9397, avg=2860.87, stdev=936.54 00:09:33.182 clat percentiles (usec): 00:09:33.182 | 1.00th=[ 1434], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:33.182 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:33.182 | 70.00th=[ 2769], 80.00th=[ 3032], 90.00th=[ 4047], 95.00th=[ 5014], 00:09:33.182 | 99.00th=[ 6390], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8586], 00:09:33.182 | 99.99th=[ 9241] 00:09:33.182 bw ( KiB/s): min=85312, max=91280, per=98.95%, avg=88416.00, stdev=2991.23, samples=3 00:09:33.182 iops : min=21328, max=22820, avg=22104.00, stdev=747.81, samples=3 00:09:33.182 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.13% 00:09:33.182 lat (msec) : 2=3.73%, 4=85.96%, 10=10.16%, 20=0.01% 00:09:33.182 cpu : usr=99.20%, sys=0.00%, ctx=5, majf=0, minf=627 00:09:33.182 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.182 issued rwts: total=44967,44700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.182 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.182 00:09:33.182 Run status group 0 (all jobs): 00:09:33.182 READ: bw=87.8MiB/s (92.0MB/s), 87.8MiB/s-87.8MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:33.182 WRITE: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:33.182 ----------------------------------------------------- 00:09:33.182 Suppressions used: 00:09:33.182 count bytes template 00:09:33.182 1 32 /usr/src/fio/parse.c 00:09:33.182 1 8 libtcmalloc_minimal.so 00:09:33.182 ----------------------------------------------------- 00:09:33.182 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:33.182 14:17:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:33.451 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:33.451 14:17:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:33.452 14:17:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.452 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.452 fio-3.35 00:09:33.452 Starting 1 thread 00:09:40.038 00:09:40.038 test: (groupid=0, jobs=1): err= 0: pid=76458: Fri Nov 29 14:17:20 2024 00:09:40.038 read: IOPS=20.0k, BW=78.2MiB/s (82.0MB/s)(157MiB/2001msec) 00:09:40.038 slat (nsec): min=4237, max=98287, avg=5321.79, stdev=2591.95 00:09:40.038 clat (usec): min=309, max=10495, avg=3178.52, stdev=1008.31 00:09:40.038 lat (usec): min=314, max=10535, avg=3183.84, stdev=1009.25 00:09:40.038 clat percentiles (usec): 00:09:40.038 | 1.00th=[ 1893], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:40.038 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2966], 00:09:40.038 | 70.00th=[ 3195], 80.00th=[ 3720], 90.00th=[ 4686], 95.00th=[ 5407], 00:09:40.038 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 8225], 99.95th=[ 8979], 00:09:40.038 | 99.99th=[10290] 00:09:40.038 bw ( KiB/s): min=77936, max=83584, per=100.00%, avg=80552.00, stdev=2846.89, samples=3 00:09:40.038 iops : min=19484, max=20896, avg=20138.00, stdev=711.72, samples=3 00:09:40.038 write: IOPS=20.0k, BW=78.0MiB/s (81.8MB/s)(156MiB/2001msec); 0 zone resets 00:09:40.038 slat (nsec): min=4294, max=76972, avg=5400.77, stdev=2487.83 00:09:40.038 clat (usec): min=496, max=10410, avg=3200.17, stdev=1004.41 00:09:40.038 lat (usec): min=501, max=10422, avg=3205.57, stdev=1005.32 00:09:40.038 clat percentiles (usec): 00:09:40.038 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:40.038 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2966], 00:09:40.038 | 70.00th=[ 3228], 80.00th=[ 3720], 90.00th=[ 4686], 95.00th=[ 5407], 00:09:40.038 | 99.00th=[ 6652], 99.50th=[ 7177], 99.90th=[ 8225], 99.95th=[ 9241], 00:09:40.038 | 99.99th=[10159] 00:09:40.038 bw ( KiB/s): min=78000, max=83560, per=100.00%, avg=80610.67, stdev=2795.43, samples=3 00:09:40.038 iops : min=19500, max=20890, avg=20152.67, stdev=698.86, samples=3 00:09:40.038 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:40.038 lat (msec) : 2=1.22%, 4=81.68%, 10=17.04%, 20=0.01% 00:09:40.038 cpu : usr=99.05%, sys=0.00%, ctx=4, majf=0, minf=626 00:09:40.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:40.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.038 issued rwts: total=40068,39979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.038 00:09:40.038 Run status group 0 (all jobs): 00:09:40.038 READ: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=157MiB (164MB), run=2001-2001msec 00:09:40.038 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:40.038 ----------------------------------------------------- 00:09:40.038 Suppressions used: 00:09:40.038 count bytes template 00:09:40.038 1 32 /usr/src/fio/parse.c 00:09:40.038 1 8 libtcmalloc_minimal.so 00:09:40.038 ----------------------------------------------------- 00:09:40.038 00:09:40.038 14:17:21 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:40.038 14:17:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:40.038 14:17:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.038 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:40.038 fio-3.35 00:09:40.038 Starting 1 thread 00:09:45.335 00:09:45.335 test: (groupid=0, jobs=1): err= 0: pid=76514: Fri Nov 29 14:17:26 2024 00:09:45.335 read: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(111MiB/2029msec) 00:09:45.335 slat (nsec): min=4877, max=78402, avg=6460.34, stdev=3438.68 00:09:45.335 clat (usec): min=1192, max=35826, avg=4014.61, stdev=1711.77 00:09:45.335 lat (usec): min=1198, max=35831, avg=4021.07, stdev=1712.63 00:09:45.335 clat percentiles (usec): 00:09:45.335 | 1.00th=[ 2147], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2966], 00:09:45.335 | 30.00th=[ 3097], 40.00th=[ 3228], 
50.00th=[ 3392], 60.00th=[ 3654], 00:09:45.335 | 70.00th=[ 4146], 80.00th=[ 4948], 90.00th=[ 6259], 95.00th=[ 7046], 00:09:45.335 | 99.00th=[ 9634], 99.50th=[10683], 99.90th=[12780], 99.95th=[32113], 00:09:45.335 | 99.99th=[35914] 00:09:45.335 bw ( KiB/s): min=24904, max=70536, per=100.00%, avg=56868.00, stdev=21450.03, samples=4 00:09:45.335 iops : min= 6226, max=17634, avg=14217.00, stdev=5362.51, samples=4 00:09:45.335 write: IOPS=14.0k, BW=54.8MiB/s (57.5MB/s)(111MiB/2029msec); 0 zone resets 00:09:45.335 slat (nsec): min=5054, max=82847, avg=6611.82, stdev=3286.35 00:09:45.335 clat (usec): min=1238, max=70051, avg=5080.50, stdev=6865.01 00:09:45.335 lat (usec): min=1244, max=70056, avg=5087.11, stdev=6865.19 00:09:45.335 clat percentiles (usec): 00:09:45.335 | 1.00th=[ 2147], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 2999], 00:09:45.335 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3425], 60.00th=[ 3687], 00:09:45.335 | 70.00th=[ 4228], 80.00th=[ 5080], 90.00th=[ 6456], 95.00th=[ 8094], 00:09:45.335 | 99.00th=[46400], 99.50th=[49546], 99.90th=[55837], 99.95th=[59507], 00:09:45.335 | 99.99th=[68682] 00:09:45.335 bw ( KiB/s): min=24800, max=70536, per=100.00%, avg=56766.00, stdev=21463.96, samples=4 00:09:45.335 iops : min= 6200, max=17634, avg=14191.50, stdev=5365.99, samples=4 00:09:45.335 lat (msec) : 2=0.74%, 4=66.49%, 10=30.67%, 20=0.71%, 50=1.17% 00:09:45.335 lat (msec) : 100=0.22% 00:09:45.335 cpu : usr=98.77%, sys=0.15%, ctx=13, majf=0, minf=626 00:09:45.335 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:45.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.335 issued rwts: total=28454,28490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.335 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.335 00:09:45.335 Run status group 0 (all jobs): 00:09:45.335 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=111MiB (117MB), run=2029-2029msec 00:09:45.335 WRITE: bw=54.8MiB/s (57.5MB/s), 54.8MiB/s-54.8MiB/s (57.5MB/s-57.5MB/s), io=111MiB (117MB), run=2029-2029msec 00:09:45.335 ----------------------------------------------------- 00:09:45.335 Suppressions used: 00:09:45.335 count bytes template 00:09:45.335 1 32 /usr/src/fio/parse.c 00:09:45.335 1 8 libtcmalloc_minimal.so 00:09:45.335 ----------------------------------------------------- 00:09:45.335 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:45.335 14:17:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:45.335 14:17:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:45.335 14:17:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:45.335 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:45.624 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:45.624 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:45.624 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:45.624 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:45.624 14:17:27 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.624 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:45.624 fio-3.35 00:09:45.624 Starting 1 thread 00:09:50.912 00:09:50.912 test: (groupid=0, jobs=1): err= 0: pid=76575: Fri Nov 29 14:17:31 2024 00:09:50.912 read: IOPS=17.5k, BW=68.4MiB/s (71.7MB/s)(137MiB/2001msec) 00:09:50.912 slat (nsec): min=4839, max=89284, avg=6306.21, stdev=3260.73 00:09:50.912 clat (usec): min=529, max=9831, avg=3630.57, stdev=1125.75 00:09:50.912 lat (usec): min=535, max=9871, avg=3636.87, stdev=1126.92 00:09:50.912 clat percentiles (usec): 00:09:50.912 | 1.00th=[ 1991], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2868], 00:09:50.912 | 30.00th=[ 2966], 40.00th=[ 3064], 50.00th=[ 3195], 60.00th=[ 3392], 00:09:50.912 | 70.00th=[ 3785], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 6128], 00:09:50.912 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8717], 99.95th=[ 9110], 00:09:50.912 | 99.99th=[ 9765] 00:09:50.912 bw ( KiB/s): min=65968, max=73288, per=99.19%, avg=69496.00, stdev=3667.13, samples=3 00:09:50.912 iops : min=16492, max=18322, avg=17374.00, stdev=916.78, samples=3 00:09:50.912 write: IOPS=17.5k, BW=68.5MiB/s (71.8MB/s)(137MiB/2001msec); 0 zone resets 00:09:50.912 slat (usec): min=4, max=141, avg= 6.44, stdev= 3.34 00:09:50.912 clat (usec): min=261, max=9759, avg=3650.15, stdev=1128.47 00:09:50.912 lat (usec): min=267, max=9772, avg=3656.59, stdev=1129.62 00:09:50.912 clat percentiles (usec): 00:09:50.912 | 1.00th=[ 1958], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2868], 00:09:50.912 | 30.00th=[ 2999], 40.00th=[ 3097], 50.00th=[ 3195], 60.00th=[ 3392], 00:09:50.912 | 70.00th=[ 3818], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 6128], 00:09:50.912 | 
99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 9110], 00:09:50.912 | 99.99th=[ 9634] 00:09:50.912 bw ( KiB/s): min=66312, max=73344, per=99.06%, avg=69480.00, stdev=3567.29, samples=3 00:09:50.912 iops : min=16578, max=18336, avg=17370.00, stdev=891.82, samples=3 00:09:50.912 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:09:50.912 lat (msec) : 2=1.00%, 4=72.71%, 10=26.25% 00:09:50.912 cpu : usr=98.90%, sys=0.00%, ctx=5, majf=0, minf=624 00:09:50.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.912 issued rwts: total=35048,35088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.912 00:09:50.912 Run status group 0 (all jobs): 00:09:50.912 READ: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=137MiB (144MB), run=2001-2001msec 00:09:50.912 WRITE: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=137MiB (144MB), run=2001-2001msec 00:09:50.912 ----------------------------------------------------- 00:09:50.912 Suppressions used: 00:09:50.912 count bytes template 00:09:50.912 1 32 /usr/src/fio/parse.c 00:09:50.912 1 8 libtcmalloc_minimal.so 00:09:50.912 ----------------------------------------------------- 00:09:50.912 00:09:50.912 ************************************ 00:09:50.912 END TEST nvme_fio 00:09:50.912 ************************************ 00:09:50.912 14:17:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.912 14:17:32 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:50.912 00:09:50.912 real 0m24.361s 00:09:50.912 user 0m15.853s 00:09:50.912 sys 0m14.737s 00:09:50.912 14:17:32 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.912 14:17:32 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:50.912 ************************************ 00:09:50.912 END TEST nvme 00:09:50.912 ************************************ 00:09:50.912 00:09:50.912 real 1m31.737s 00:09:50.912 user 3m30.482s 00:09:50.912 sys 0m25.081s 00:09:50.912 14:17:32 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.912 14:17:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.912 14:17:32 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:50.912 14:17:32 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.912 14:17:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:50.912 14:17:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.912 14:17:32 -- common/autotest_common.sh@10 -- # set +x 00:09:50.912 ************************************ 00:09:50.912 START TEST nvme_scc 00:09:50.912 ************************************ 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:50.912 * Looking for test storage... 
00:09:50.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.912 14:17:32 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.912 --rc genhtml_branch_coverage=1 00:09:50.912 --rc genhtml_function_coverage=1 00:09:50.912 --rc genhtml_legend=1 00:09:50.912 --rc geninfo_all_blocks=1 00:09:50.912 --rc geninfo_unexecuted_blocks=1 00:09:50.912 00:09:50.912 ' 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.912 --rc genhtml_branch_coverage=1 00:09:50.912 --rc genhtml_function_coverage=1 00:09:50.912 --rc genhtml_legend=1 00:09:50.912 --rc geninfo_all_blocks=1 00:09:50.912 --rc geninfo_unexecuted_blocks=1 00:09:50.912 00:09:50.912 ' 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.912 --rc genhtml_branch_coverage=1 00:09:50.912 --rc genhtml_function_coverage=1 00:09:50.912 --rc genhtml_legend=1 00:09:50.912 --rc geninfo_all_blocks=1 00:09:50.912 --rc geninfo_unexecuted_blocks=1 00:09:50.912 00:09:50.912 ' 00:09:50.912 14:17:32 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.912 --rc genhtml_branch_coverage=1 00:09:50.913 --rc genhtml_function_coverage=1 00:09:50.913 --rc genhtml_legend=1 00:09:50.913 --rc geninfo_all_blocks=1 00:09:50.913 --rc geninfo_unexecuted_blocks=1 00:09:50.913 00:09:50.913 ' 00:09:50.913 14:17:32 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.913 14:17:32 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.913 14:17:32 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.913 14:17:32 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.913 14:17:32 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.913 14:17:32 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.913 14:17:32 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.913 14:17:32 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.913 14:17:32 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:50.913 14:17:32 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
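The lcov check traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) decides which coverage flags to export by splitting both version strings on '.', '-' and ':' and comparing them field by field. A condensed, illustrative re-implementation of just the less-than case exercised here (the real helper supports other operators as well; version_lt is an illustrative name):

    # version_lt: succeed when $1 sorts before $2 (simplified sketch)
    version_lt() {
        local IFS=.-:                       # split on the same separators the trace shows
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not strictly less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov, use legacy flags"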
00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:50.913 14:17:32 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:50.913 14:17:32 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.913 14:17:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:50.913 14:17:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:50.913 14:17:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:50.913 14:17:32 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:50.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.174 Waiting for block devices as requested 00:09:51.174 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.174 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.174 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.436 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.738 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.738 14:17:38 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:56.738 14:17:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.738 14:17:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:56.738 14:17:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.738 14:17:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
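What follows is scan_nvme_ctrls at work: for each controller under /sys/class/nvme/nvme* it runs the packaged nvme-cli (id-ctrl here, id-ns for the namespaces later) and nvme_get folds every "field : value" line into a global associative array, one eval per register. A stripped-down sketch of that loop, assuming the same nvme-cli path as in the trace:

    # roughly what nvme_get does for one controller (simplified; the real function
    # evals into a named, globally declared array and skips empty values)
    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # field name, e.g. vid, sn, mdts
        val="${val#"${val%%[![:space:]]*}"}"     # strip leading whitespace from the value
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "model=${nvme0[mn]} serial=${nvme0[sn]} mdts=${nvme0[mdts]}"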
00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.738 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
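One field already captured above, mdts=7, is worth decoding: MDTS caps a single transfer at 2^MDTS units of the controller's minimum memory page size, so assuming the usual 4 KiB minimum page (CAP.MPSMIN of 0, which this trace does not show), the emulated controller accepts I/O of up to 512 KiB per command:

    mdts=7
    min_page=$(( 4 * 1024 ))                    # assumption: CAP.MPSMIN of 0 -> 4 KiB pages
    echo "$(( (1 << mdts) * min_page )) bytes"  # 524288 bytes = 512 KiB per command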
00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:56.739 14:17:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.739 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:56.740 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.741 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.741 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
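The namespace registers read so far make the size of nvme0n1 easy to check by hand: nsze=0x140000 logical blocks, flbas=0x4 selects LBA format 4, and the lbaf4 entry a little further down reports lbads:12, i.e. 4096-byte blocks, which works out to exactly 5 GiB:

    nsze=0x140000                 # from nvme0n1[nsze] above
    lbads=12                      # lbaf4 (the format flbas points at): 2^12 = 4096-byte blocks
    echo $(( nsze * (1 << lbads) ))               # 5368709120 bytes
    echo $(( nsze * (1 << lbads) / 1024**3 ))GiB  # 5GiB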
00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.742 14:17:38 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:56.742 14:17:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:56.742 14:17:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.742 14:17:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:56.743 14:17:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.743 14:17:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:56.743 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.743 
14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.743 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
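The xtrace entries above show nvme/functions.sh's nvme_get walking the output of /usr/local/src/nvme-cli/nvme id-ctrl with an IFS=: read loop and eval-ing every non-empty field into a per-device associative array (nvme1[vid], nvme1[mdts], nvme1[ps0], ...). A minimal standalone sketch of that read/eval pattern, assuming only that an nvme binary is on PATH and using an illustrative helper name rather than the real function:

nvme_get_sketch() {
    # Illustrative helper (not the actual nvme/functions.sh nvme_get): run
    # "nvme <subcmd> <dev>", split each line on the first ':' and store every
    # non-empty value in a global associative array named after the device.
    local ref=$1 subcmd=$2 dev=$3 reg val
    local -gA "$ref=()"                          # e.g. declare -gA nvme1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # field name, e.g. mdts, sn, ps0
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces from the value
        [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\$val"
    done < <(nvme "$subcmd" "$dev")              # assumes nvme-cli is on PATH
}
# e.g.: nvme_get_sketch nvme1 id-ctrl /dev/nvme1 && echo "${nvme1[mdts]}"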
00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:56.744 14:17:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:56.744 14:17:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:56.744 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:56.745 14:17:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
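Around these per-field assignments, the trace also shows the outer scan (functions.sh@47–63, as the entries below repeat for nvme1n1 and then nvme2): each /sys/class/nvme/nvme* controller is admitted via pci_can_use, its namespaces are identified with id-ns, and the results are recorded in the ctrls, nvmes, bdfs and ordered_ctrls arrays. A hedged sketch of that enumeration and bookkeeping, with made-up names and a readlink-based PCI lookup standing in for whatever lookup the real script performs:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

scan_nvme_ctrls_sketch() {
    # Illustrative scan loop; the real logic lives in nvme/functions.sh and
    # scripts/common.sh (including the pci_can_use filter the trace shows
    # returning 0 for each controller's PCI address).
    local ctrl ctrl_dev ns pci
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                                 # e.g. nvme1
        pci=$(basename "$(readlink -f "$ctrl/device")")      # PCIe BDF, e.g. 0000:00:10.0 (assumed lookup)
        # ... id-ctrl output would be parsed here into a global array named
        #     "$ctrl_dev" (see the read/eval sketch earlier) ...
        declare -gA "${ctrl_dev}_ns=()"
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do                  # namespaces, e.g. nvme1n1
            [[ -e $ns ]] || continue
            # ... id-ns output would be parsed the same way into a "$ns" array ...
            _ctrl_ns[${ns##*n}]=${ns##*/}                    # index by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
}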
00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.745 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:56.746 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.747 
14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:56.747 14:17:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.747 14:17:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:56.747 14:17:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.747 14:17:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.747 14:17:38 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:56.747 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:56.748 14:17:38 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:56.748 14:17:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
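The trace above repeats one pattern per id-ctrl field: set IFS=:, read a reg/val pair from the nvme-cli output, skip lines with no value, then eval the pair into a global associative array (nvme2[vid]=0x1b36, nvme2[mdts]=7, and so on). A minimal sketch of that loop, reconstructed from the trace rather than copied from nvme/functions.sh (field trimming and error handling are simplified):

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val      # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        declare -gA "$ref"                      # global assoc. array, as in "local -gA nvme2=()"
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # fields without a value are skipped
            reg=${reg//[[:space:]]/}            # "mdts      " -> "mdts"
            val=${val# }                        # drop the single leading space after ":"
            eval "${ref}[\$reg]=\$val"          # nvme2[mdts]=7, nvme2[subnqn]=nqn.2019-08.org.qemu:12342, ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }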
00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.748 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:56.749 14:17:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:56.749 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:56.750 
14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:56.750 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
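Values such as nvme2n1[flbas]=0x4 above, combined with the lbafN descriptor strings captured further down ("ms:0 lbads:12 rp:0 (in use)" for this QEMU namespace), are enough to derive the in-use logical block size as 2^lbads. A hypothetical helper, not part of nvme/functions.sh, that reads those array entries back:

    lba_size() {
        local ns=$1 flbas fmt desc lbads
        eval "flbas=\${${ns}[flbas]}"            # e.g. nvme2n1[flbas]=0x4
        fmt=$((flbas & 0xf))                     # low nibble selects the LBA format index
        eval "desc=\${${ns}[lbaf$fmt]}"          # "ms:0 lbads:12 rp:0 (in use)"
        lbads=${desc#*lbads:}; lbads=${lbads%% *}   # -> 12
        echo $((1 << lbads))                     # lbads:12 -> 4096-byte blocks
    }
    # usage: lba_size nvme2n1   -> prints 4096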
00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.751 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.751 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:56.752 14:17:38 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:56.752 14:17:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.752 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
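Note on the dump above: each nvme2n2[...] assignment is produced by the nvme_get helper in nvme/functions.sh, which pipes nvme id-ns output through IFS=: and read -r reg val, skips empty values, and eval-assigns every remaining field into a global associative array. Below is a minimal standalone sketch of that same pattern; the array name is illustrative and the whitespace trimming is simplified compared to the real helper.

    # Sketch: capture "field : value" lines from nvme-cli into a bash associative array.
    declare -A ns_info=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # strip padding around the field name
        [[ -n $reg && -n $val ]] || continue   # mirrors the [[ -n ... ]] guard in the trace
        ns_info[$reg]=${val# }                 # keep the value, minus one leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

On a host where /dev/nvme2n2 exists, this should print the same nsze and flbas values recorded in the trace.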
00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 
14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:56.753 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 
14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:56.754 14:17:38 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.754 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.755 
14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:56.755 14:17:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.755 14:17:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:56.755 14:17:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.755 14:17:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:56.755 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.755 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
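The lbaf0..lbaf7 strings captured above for nvme2n2 and nvme2n3 describe the namespace's supported LBA formats: ms is the metadata size in bytes, lbads is log2 of the data block size (lbads:9 means 512-byte blocks, lbads:12 means 4096-byte blocks), and rp is the relative performance hint. flbas=0x4 selects format 4, which is why lbaf4 is the entry marked "(in use)". A short sketch that decodes one of those strings follows; the string literal is copied from the trace and the sed-based parsing is only illustrative.

    # Sketch: turn an lbaf string from the trace into a block size in bytes.
    lbaf='ms:0 lbads:12 rp:0 (in use)'
    ms=$(sed -n 's/.*ms:\([0-9]*\).*/\1/p' <<<"$lbaf")
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
    echo "metadata: ${ms} B, block size: $(( 1 << lbads )) B"   # 4096 B for lbads:12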
00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 
14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.756 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
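For context on where the nvme3 dump comes from: the outer loop at functions.sh@47-52 (visible just before the nvme3 fields start) walks /sys/class/nvme/nvme*, resolves each controller's PCIe address (0000:00:13.0 for nvme3), checks it against the PCI allow/block lists via pci_can_use, and only then calls nvme_get with id-ctrl. The sketch below mimics that discovery step; reading the BDF through the sysfs device symlink is an assumption about one common way to obtain it, not necessarily what functions.sh itself does.

    # Sketch: list NVMe controllers with their PCIe addresses (BDF taken from the
    # sysfs 'device' symlink; this detail is assumed, not lifted from functions.sh).
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        echo "${ctrl##*/} -> $bdf"             # e.g. nvme3 -> 0000:00:13.0
    done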
00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.757 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
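The oncs value captured for nvme3 a few lines up (0x15d) is the register the rest of this test keys on: nvme_scc.sh asks get_ctrl_with_feature scc for a controller whose ONCS has bit 8 set (the Copy command), and the selection loop near the end of the trace runs exactly that test against every controller, ultimately settling on nvme1 at 0000:00:10.0. A sketch of the check, mirroring ctrl_has_scc at functions.sh@184-188; the array literal simply replays the value from the trace.

    # Sketch: the ONCS bit-8 test used to pick a copy-capable (SCC) controller.
    declare -A nvme3=([oncs]=0x15d)            # value recorded in the trace above
    oncs=${nvme3[oncs]}
    if (( oncs & 1 << 8 )); then               # 0x15d & 0x100 == 0x100, so true
        echo "nvme3 supports the Copy command (SCC)"
    fi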
00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.758 14:17:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:56.758 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:56.758 
14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:56.758 14:17:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:57.020 14:17:38 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:57.020 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:57.020 14:17:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:57.020 14:17:38 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.855 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.855 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.855 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:58.116 14:17:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.116 14:17:39 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:58.116 14:17:39 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.116 14:17:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.116 ************************************ 00:09:58.116 START TEST nvme_simple_copy 00:09:58.116 ************************************ 00:09:58.116 14:17:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.377 Initializing NVMe Controllers 00:09:58.377 Attaching to 0000:00:10.0 00:09:58.377 Controller supports SCC. Attached to 0000:00:10.0 00:09:58.377 Namespace ID: 1 size: 6GB 00:09:58.377 Initialization complete. 00:09:58.377 00:09:58.377 Controller QEMU NVMe Ctrl (12340 ) 00:09:58.377 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:58.377 Namespace Block Size:4096 00:09:58.377 Writing LBAs 0 to 63 with Random Data 00:09:58.377 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:58.377 LBAs matching Written Data: 64 00:09:58.377 00:09:58.377 real 0m0.263s 00:09:58.377 user 0m0.094s 00:09:58.377 sys 0m0.067s 00:09:58.377 14:17:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.377 ************************************ 00:09:58.377 END TEST nvme_simple_copy 00:09:58.377 ************************************ 00:09:58.377 14:17:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.377 ************************************ 00:09:58.377 END TEST nvme_scc 00:09:58.377 ************************************ 00:09:58.377 00:09:58.377 real 0m7.907s 00:09:58.377 user 0m1.005s 00:09:58.377 sys 0m1.443s 00:09:58.377 14:17:40 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.377 14:17:40 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.377 14:17:40 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:58.377 14:17:40 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:58.377 14:17:40 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:58.377 14:17:40 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:58.377 14:17:40 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:58.377 14:17:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:58.377 14:17:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.377 14:17:40 -- common/autotest_common.sh@10 -- # set +x 00:09:58.377 ************************************ 00:09:58.377 START TEST nvme_fdp 00:09:58.377 ************************************ 00:09:58.377 14:17:40 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:58.377 * Looking for test storage... 
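The ctrl_has_scc calls traced above all reduce to one bitwise test: read the controller's ONCS field (0x15d for every controller here) and check bit 8, the Copy (Simple Copy) command bit, via (( oncs & 1 << 8 )). A minimal standalone sketch of that check, assuming nvme-cli is installed and that has_simple_copy is a hypothetical helper rather than anything in functions.sh:

    # Hypothetical helper mirroring the ONCS bit-8 test seen in the trace above.
    has_simple_copy() {
        local ctrl_dev=$1 oncs
        # nvme-cli prints the field as e.g. "oncs : 0x15d"; keep only the hex value.
        oncs=$(nvme id-ctrl "$ctrl_dev" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        # Bit 8 of ONCS advertises Copy (Simple Copy) support.
        (( oncs & 1 << 8 ))
    }
    has_simple_copy /dev/nvme1 && echo "nvme1 supports Simple Copy"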
00:09:58.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.639 14:17:40 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:58.639 14:17:40 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:58.639 14:17:40 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:58.639 14:17:40 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.639 14:17:40 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:58.640 14:17:40 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.640 14:17:40 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:58.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.640 --rc genhtml_branch_coverage=1 00:09:58.640 --rc genhtml_function_coverage=1 00:09:58.640 --rc genhtml_legend=1 00:09:58.640 --rc geninfo_all_blocks=1 00:09:58.640 --rc geninfo_unexecuted_blocks=1 00:09:58.640 00:09:58.640 ' 00:09:58.640 14:17:40 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:58.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.640 --rc genhtml_branch_coverage=1 00:09:58.640 --rc genhtml_function_coverage=1 00:09:58.640 --rc genhtml_legend=1 00:09:58.640 --rc geninfo_all_blocks=1 00:09:58.640 --rc geninfo_unexecuted_blocks=1 00:09:58.640 00:09:58.640 ' 00:09:58.640 14:17:40 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:58.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.640 --rc genhtml_branch_coverage=1 00:09:58.640 --rc genhtml_function_coverage=1 00:09:58.640 --rc genhtml_legend=1 00:09:58.640 --rc geninfo_all_blocks=1 00:09:58.640 --rc geninfo_unexecuted_blocks=1 00:09:58.640 00:09:58.640 ' 00:09:58.640 14:17:40 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:58.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.640 --rc genhtml_branch_coverage=1 00:09:58.640 --rc genhtml_function_coverage=1 00:09:58.640 --rc genhtml_legend=1 00:09:58.640 --rc geninfo_all_blocks=1 00:09:58.640 --rc geninfo_unexecuted_blocks=1 00:09:58.640 00:09:58.640 ' 00:09:58.640 14:17:40 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.640 14:17:40 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.640 14:17:40 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.640 14:17:40 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.640 14:17:40 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.640 14:17:40 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:58.640 14:17:40 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
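The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions) is a plain dotted-version comparison used to decide whether the installed lcov (1.15 here) predates 2.x before the legacy --rc lcov_*_coverage option names are chosen. A minimal sketch of that comparison in bash, where version_lt is a hypothetical stand-in rather than the harness's own function:

    # Hypothetical dotted-version "less than" check mirroring the cmp_versions trace above.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"    # e.g. "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$2"    # e.g. "2"    -> (2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing components compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x; keep the legacy --rc lcov_* options"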
00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:58.640 14:17:40 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:58.640 14:17:40 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.640 14:17:40 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:58.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.162 Waiting for block devices as requested 00:09:59.162 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.162 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.162 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.423 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.745 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:04.745 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:04.745 14:17:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.745 14:17:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:04.745 14:17:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.745 14:17:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:04.745 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.745 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:04.746 14:17:46 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:04.746 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.746 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:04.747 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.747 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:04.748 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.748 
14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:04.748 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.748 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:04.748 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:04.749 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.749 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:04.750 14:17:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.750 14:17:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:04.750 14:17:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.750 14:17:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 
14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.750 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 
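[Annotation] The repeated `IFS=:` / `read -r reg val` / `eval 'nvme1[...]="..."'` statements traced above come from the nvme_get helper in nvme/functions.sh: it runs the locally built nvme-cli binary (`/usr/local/src/nvme-cli/nvme id-ctrl` or `id-ns`), splits each output line on the first colon into a register name and value, and stores the pair in a globally declared associative array named after the device (nvme1, nvme1n1, ...). A minimal sketch of that pattern, simplified from what the trace shows (error handling and value normalization details omitted; only the structure visible in the trace is reproduced):

    # Sketch of the parsing loop traced above (simplified; not the verbatim helper).
    nvme_get() {
        local ref=$1 reg val     # e.g. ref=nvme1
        shift
        local -gA "$ref=()"      # declare the per-device associative array globally
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip header/blank lines
            val=${val# }                     # drop the space that follows ':'
            eval "${ref}[$reg]=\"\$val\""    # nvme1[mdts]=7, nvme1[sqes]=0x66, ...
        done < <(nvme "$@")      # the trace invokes /usr/local/src/nvme-cli/nvme
    }

Called as `nvme_get nvme1 id-ctrl /dev/nvme1`, this populates `nvme1[vid]`, `nvme1[sn]`, `nvme1[oacs]`, and the other fields that the trace assigns one by one.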
14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.751 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:04.752 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:04.752 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:04.753 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:04.753 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:04.754 14:17:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.754 14:17:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:04.754 14:17:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.754 14:17:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:04.754 
14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.754 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:04.755 14:17:46 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.755 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.756 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.757 14:17:46 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
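The trace above is the id-ctrl pass: the helper pipes /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 through a `while IFS=: read -r reg val` loop, skips fields with empty values via the `[[ -n ... ]]` checks, and evals each reg/val pair into the nvme2 associative array. A minimal stand-alone sketch of that pattern follows, assuming nvme-cli is installed; the function and array names are invented for illustration and are not the ones used in nvme/functions.sh.

get_id_ctrl() {
    # Illustrative only (not the project's nvme_get): parse `nvme id-ctrl`
    # "field : value" lines into a bash associative array, mirroring the
    # read loop in the trace but assigning directly instead of eval.
    local dev=$1 reg val
    declare -gA idctrl=()                                  # hypothetical array name
    while IFS=: read -r reg val; do
        [[ -n ${val//[[:space:]]/} ]] || continue          # skip fields with no value
        reg=${reg//[[:space:]]/}                           # field name, e.g. vid, sn, oncs
        val=${val#"${val%%[![:space:]]*}"}                 # trim leading whitespace
        idctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}
# Example: get_id_ctrl /dev/nvme2; echo "${idctrl[sn]} ${idctrl[oncs]}"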
00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:04.757 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.757 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
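The id-ns pass works the same way, driven by the sysfs namespace loop seen earlier (`for ns in "$ctrl/${ctrl##*/}n"*`). A rough stand-alone equivalent, again assuming nvme-cli; the helper name is invented for illustration.

dump_ns_identify() {
    # Illustrative helper, not part of nvme/functions.sh: for each namespace
    # block device under one controller, print the identify-namespace data
    # that the trace is capturing into nvme2n1, nvme2n2, ...
    local ctrl=$1 ns
    for ns in "/sys/class/nvme/$ctrl/${ctrl}n"*; do
        [[ -e $ns ]] || continue                           # glob may match nothing
        echo "== /dev/${ns##*/} =="
        nvme id-ns "/dev/${ns##*/}"                        # nsze, flbas, lbaf0..lbaf7, ...
    done
}
# Example: dump_ns_identify nvme2
In the lbafN descriptors that follow, lbads is a power of two (lbads:9 means 512-byte and lbads:12 means 4096-byte logical blocks), and the entry marked "(in use)" is the format currently selected by flbas.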
00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.758 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:04.759 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:04.760 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:04.760 14:17:46 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:04.761 
14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.761 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:04.762 14:17:46 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.762 14:17:46 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:04.762 14:17:46 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.762 14:17:46 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:04.762 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:04.762 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 
14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.763 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 
14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:04.764 14:17:46 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:04.765 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:04.765 14:17:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:04.766 14:17:46 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:04.766 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:04.766 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:04.766 14:17:46 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.911 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.911 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.911 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.911 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.911 14:17:47 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.911 14:17:47 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:05.911 14:17:47 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.911 14:17:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:05.911 ************************************ 00:10:05.911 START TEST nvme_flexible_data_placement 00:10:05.911 ************************************ 00:10:05.911 14:17:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:06.174 Initializing NVMe Controllers 00:10:06.174 Attaching to 0000:00:13.0 00:10:06.174 Controller supports FDP Attached to 0000:00:13.0 00:10:06.174 Namespace ID: 1 Endurance Group ID: 1 00:10:06.174 Initialization complete. 00:10:06.174 00:10:06.174 ================================== 00:10:06.174 == FDP tests for Namespace: #01 == 00:10:06.174 ================================== 00:10:06.174 00:10:06.174 Get Feature: FDP: 00:10:06.174 ================= 00:10:06.174 Enabled: Yes 00:10:06.174 FDP configuration Index: 0 00:10:06.174 00:10:06.174 FDP configurations log page 00:10:06.174 =========================== 00:10:06.174 Number of FDP configurations: 1 00:10:06.174 Version: 0 00:10:06.174 Size: 112 00:10:06.174 FDP Configuration Descriptor: 0 00:10:06.174 Descriptor Size: 96 00:10:06.174 Reclaim Group Identifier format: 2 00:10:06.174 FDP Volatile Write Cache: Not Present 00:10:06.174 FDP Configuration: Valid 00:10:06.174 Vendor Specific Size: 0 00:10:06.174 Number of Reclaim Groups: 2 00:10:06.174 Number of Recalim Unit Handles: 8 00:10:06.174 Max Placement Identifiers: 128 00:10:06.174 Number of Namespaces Suppprted: 256 00:10:06.174 Reclaim unit Nominal Size: 6000000 bytes 00:10:06.174 Estimated Reclaim Unit Time Limit: Not Reported 00:10:06.174 RUH Desc #000: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #001: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #002: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #003: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #004: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #005: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #006: RUH Type: Initially Isolated 00:10:06.174 RUH Desc #007: RUH Type: Initially Isolated 00:10:06.174 00:10:06.174 FDP reclaim unit handle usage log page 00:10:06.174 ====================================== 00:10:06.174 Number of Reclaim Unit Handles: 8 00:10:06.174 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:06.174 RUH Usage Desc #001: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #002: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #003: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #004: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #005: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #006: RUH Attributes: Unused 00:10:06.174 RUH Usage Desc #007: RUH Attributes: Unused 00:10:06.174 00:10:06.174 FDP statistics log page 00:10:06.174 ======================= 00:10:06.174 Host bytes with metadata written: 2300162048 00:10:06.174 Media bytes with metadata written: 2304004096 00:10:06.174 Media bytes erased: 0 00:10:06.174 00:10:06.174 FDP Reclaim unit handle status 00:10:06.174 ============================== 00:10:06.174 Number of RUHS descriptors: 2 00:10:06.174 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000e65 00:10:06.174 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:06.174 00:10:06.174 FDP write on placement id: 0 success 00:10:06.174 00:10:06.174 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:10:06.174 00:10:06.174 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:06.174 00:10:06.174 Get Feature: FDP Events for Placement handle: #0 00:10:06.174 ======================== 00:10:06.174 Number of FDP Events: 6 00:10:06.174 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:06.174 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:06.174 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:06.174 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:06.174 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:06.175 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:06.175 00:10:06.175 FDP events log page 00:10:06.175 =================== 00:10:06.175 Number of FDP events: 1 00:10:06.175 FDP Event #0: 00:10:06.175 Event Type: RU Not Written to Capacity 00:10:06.175 Placement Identifier: Valid 00:10:06.175 NSID: Valid 00:10:06.175 Location: Valid 00:10:06.175 Placement Identifier: 0 00:10:06.175 Event Timestamp: 4 00:10:06.175 Namespace Identifier: 1 00:10:06.175 Reclaim Group Identifier: 0 00:10:06.175 Reclaim Unit Handle Identifier: 0 00:10:06.175 00:10:06.175 FDP test passed 00:10:06.175 00:10:06.175 real 0m0.220s 00:10:06.175 user 0m0.050s 00:10:06.175 sys 0m0.067s 00:10:06.175 14:17:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.175 14:17:47 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:06.175 ************************************ 00:10:06.175 END TEST nvme_flexible_data_placement 00:10:06.175 ************************************ 00:10:06.175 00:10:06.175 real 0m7.795s 00:10:06.175 user 0m1.030s 00:10:06.175 sys 0m1.470s 00:10:06.175 ************************************ 00:10:06.175 END TEST nvme_fdp 00:10:06.175 ************************************ 00:10:06.175 14:17:47 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.175 14:17:47 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:06.175 14:17:47 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:06.175 14:17:47 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.175 14:17:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.175 14:17:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.175 14:17:47 -- common/autotest_common.sh@10 -- # set +x 00:10:06.175 ************************************ 00:10:06.175 START TEST nvme_rpc 00:10:06.175 ************************************ 00:10:06.175 14:17:47 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.437 * Looking for test storage... 
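The controller selection traced earlier settles on nvme3 because only its CTRATT value (0x88010) has bit 19, the FDP attribute, set; the other controllers report 0x8000 and fail the `(( ctratt & 1 << 19 ))` test. A minimal standalone check of the same bit, assuming nvme-cli is installed and /dev/nvme0 is the controller to probe (both are assumptions, not part of this run, and the awk pattern depends on nvme-cli printing a "ctratt : 0x..." line in id-ctrl output):

    # Read CTRATT from Identify Controller and test the FDP bit (bit 19).
    ctratt=$(nvme id-ctrl /dev/nvme0 | awk '/^ctratt/ {print $3}')
    if (( ctratt & (1 << 19) )); then
        echo "FDP supported (ctratt=$ctratt)"
    else
        echo "FDP not supported (ctratt=$ctratt)"
    fi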
00:10:06.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.437 14:17:48 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.437 --rc genhtml_branch_coverage=1 00:10:06.437 --rc genhtml_function_coverage=1 00:10:06.437 --rc genhtml_legend=1 00:10:06.437 --rc geninfo_all_blocks=1 00:10:06.437 --rc geninfo_unexecuted_blocks=1 00:10:06.437 00:10:06.437 ' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.437 --rc genhtml_branch_coverage=1 00:10:06.437 --rc genhtml_function_coverage=1 00:10:06.437 --rc genhtml_legend=1 00:10:06.437 --rc geninfo_all_blocks=1 00:10:06.437 --rc geninfo_unexecuted_blocks=1 00:10:06.437 00:10:06.437 ' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:06.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.437 --rc genhtml_branch_coverage=1 00:10:06.437 --rc genhtml_function_coverage=1 00:10:06.437 --rc genhtml_legend=1 00:10:06.437 --rc geninfo_all_blocks=1 00:10:06.437 --rc geninfo_unexecuted_blocks=1 00:10:06.437 00:10:06.437 ' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.437 --rc genhtml_branch_coverage=1 00:10:06.437 --rc genhtml_function_coverage=1 00:10:06.437 --rc genhtml_legend=1 00:10:06.437 --rc geninfo_all_blocks=1 00:10:06.437 --rc geninfo_unexecuted_blocks=1 00:10:06.437 00:10:06.437 ' 00:10:06.437 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.437 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.437 14:17:48 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:06.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.437 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:06.437 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77945 00:10:06.438 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:06.438 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:06.438 14:17:48 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77945 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77945 ']' 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.438 14:17:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.699 [2024-11-29 14:17:48.255208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
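The get_first_nvme_bdf call traced above builds its BDF list from gen_nvme.sh output and keeps the first entry, which is how 0000:00:10.0 ends up as $bdf for this test. A rough standalone equivalent, assuming the repository path used in this run and that jq is available:

    # Collect NVMe PCI addresses from the generated bdev config; keep the first one.
    rootdir=/home/vagrant/spdk_repo/spdk    # path as used in this run
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    echo "first NVMe bdf: ${bdfs[0]}"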
00:10:06.699 [2024-11-29 14:17:48.255354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77945 ] 00:10:06.699 [2024-11-29 14:17:48.408084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.699 [2024-11-29 14:17:48.458943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.699 [2024-11-29 14:17:48.459007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.644 14:17:49 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.644 14:17:49 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:07.644 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:07.644 Nvme0n1 00:10:07.644 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:07.644 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:07.905 request: 00:10:07.905 { 00:10:07.905 "bdev_name": "Nvme0n1", 00:10:07.905 "filename": "non_existing_file", 00:10:07.905 "method": "bdev_nvme_apply_firmware", 00:10:07.905 "req_id": 1 00:10:07.905 } 00:10:07.905 Got JSON-RPC error response 00:10:07.905 response: 00:10:07.905 { 00:10:07.905 "code": -32603, 00:10:07.905 "message": "open file failed." 00:10:07.905 } 00:10:07.905 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:07.905 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:07.905 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:08.167 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:08.167 14:17:49 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77945 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77945 ']' 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77945 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77945 00:10:08.167 killing process with pid 77945 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77945' 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77945 00:10:08.167 14:17:49 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77945 00:10:08.739 ************************************ 00:10:08.739 END TEST nvme_rpc 00:10:08.739 ************************************ 00:10:08.739 00:10:08.739 real 0m2.300s 00:10:08.739 user 0m4.340s 00:10:08.739 sys 0m0.600s 00:10:08.739 14:17:50 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.739 14:17:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.739 14:17:50 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.739 14:17:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:08.739 14:17:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.739 14:17:50 -- common/autotest_common.sh@10 -- # set +x 00:10:08.739 ************************************ 00:10:08.739 START TEST nvme_rpc_timeouts 00:10:08.739 ************************************ 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.739 * Looking for test storage... 00:10:08.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.739 14:17:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.739 --rc genhtml_branch_coverage=1 00:10:08.739 --rc genhtml_function_coverage=1 00:10:08.739 --rc genhtml_legend=1 00:10:08.739 --rc geninfo_all_blocks=1 00:10:08.739 --rc geninfo_unexecuted_blocks=1 00:10:08.739 00:10:08.739 ' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.739 --rc genhtml_branch_coverage=1 00:10:08.739 --rc genhtml_function_coverage=1 00:10:08.739 --rc genhtml_legend=1 00:10:08.739 --rc geninfo_all_blocks=1 00:10:08.739 --rc geninfo_unexecuted_blocks=1 00:10:08.739 00:10:08.739 ' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.739 --rc genhtml_branch_coverage=1 00:10:08.739 --rc genhtml_function_coverage=1 00:10:08.739 --rc genhtml_legend=1 00:10:08.739 --rc geninfo_all_blocks=1 00:10:08.739 --rc geninfo_unexecuted_blocks=1 00:10:08.739 00:10:08.739 ' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:08.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.739 --rc genhtml_branch_coverage=1 00:10:08.739 --rc genhtml_function_coverage=1 00:10:08.739 --rc genhtml_legend=1 00:10:08.739 --rc geninfo_all_blocks=1 00:10:08.739 --rc geninfo_unexecuted_blocks=1 00:10:08.739 00:10:08.739 ' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78000 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78000 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78032 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78032 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78032 ']' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.739 14:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.739 14:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:09.001 [2024-11-29 14:17:50.560181] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:09.001 [2024-11-29 14:17:50.560360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78032 ] 00:10:09.001 [2024-11-29 14:17:50.714695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:09.001 [2024-11-29 14:17:50.765765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.001 [2024-11-29 14:17:50.765791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.946 Checking default timeout settings: 00:10:09.946 14:17:51 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.947 14:17:51 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:09.947 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:09.947 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:10.208 Making settings changes with rpc: 00:10:10.208 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:10.208 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:10.208 Check default vs. modified settings: 00:10:10.208 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:10.208 14:17:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 Setting action_on_timeout is changed as expected. 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 Setting timeout_us is changed as expected. 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
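Each setting above is compared by grepping the saved-config snapshots taken before and after bdev_nvme_set_options. The same comparison can be done structurally with jq; this is only a sketch, reusing the temp file names from this run and assuming jq is present, that save_config lays the data out as subsystems -> config -> params, and that the parameter names match the settings_to_check list:

    for f in /tmp/settings_default_78000 /tmp/settings_modified_78000; do
        echo "== $f =="
        # Pull the nvme timeout parameters out of the saved configuration.
        jq '.subsystems[] | select(.subsystem == "bdev")
            | .config[] | select(.method == "bdev_nvme_set_options")
            | .params | {action_on_timeout, timeout_us, timeout_admin_us}' "$f"
    done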
00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.779 Setting timeout_admin_us is changed as expected. 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78000 /tmp/settings_modified_78000 00:10:10.779 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78032 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78032 ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78032 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78032 00:10:10.779 killing process with pid 78032 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78032' 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78032 00:10:10.779 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78032 00:10:11.351 RPC TIMEOUT SETTING TEST PASSED. 00:10:11.351 14:17:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:11.351 00:10:11.351 real 0m2.590s 00:10:11.351 user 0m4.973s 00:10:11.351 sys 0m0.631s 00:10:11.351 ************************************ 00:10:11.351 END TEST nvme_rpc_timeouts 00:10:11.351 ************************************ 00:10:11.351 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.351 14:17:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:11.351 14:17:52 -- spdk/autotest.sh@239 -- # uname -s 00:10:11.351 14:17:52 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:11.351 14:17:52 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:11.351 14:17:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.351 14:17:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.351 14:17:52 -- common/autotest_common.sh@10 -- # set +x 00:10:11.351 ************************************ 00:10:11.351 START TEST sw_hotplug 00:10:11.351 ************************************ 00:10:11.351 14:17:52 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:11.351 * Looking for test storage... 00:10:11.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.351 14:17:53 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.351 --rc genhtml_branch_coverage=1 00:10:11.351 --rc genhtml_function_coverage=1 00:10:11.351 --rc genhtml_legend=1 00:10:11.351 --rc geninfo_all_blocks=1 00:10:11.351 --rc geninfo_unexecuted_blocks=1 00:10:11.351 00:10:11.351 ' 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.351 --rc genhtml_branch_coverage=1 00:10:11.351 --rc genhtml_function_coverage=1 00:10:11.351 --rc genhtml_legend=1 00:10:11.351 --rc geninfo_all_blocks=1 00:10:11.351 --rc geninfo_unexecuted_blocks=1 00:10:11.351 00:10:11.351 ' 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.351 --rc genhtml_branch_coverage=1 00:10:11.351 --rc genhtml_function_coverage=1 00:10:11.351 --rc genhtml_legend=1 00:10:11.351 --rc geninfo_all_blocks=1 00:10:11.351 --rc geninfo_unexecuted_blocks=1 00:10:11.351 00:10:11.351 ' 00:10:11.351 14:17:53 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.351 --rc genhtml_branch_coverage=1 00:10:11.351 --rc genhtml_function_coverage=1 00:10:11.351 --rc genhtml_legend=1 00:10:11.351 --rc geninfo_all_blocks=1 00:10:11.351 --rc geninfo_unexecuted_blocks=1 00:10:11.351 00:10:11.351 ' 00:10:11.351 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.922 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.922 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.922 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.922 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
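The nvme_in_userspace expansion in the trace that follows reduces to a single lspci pipeline: match PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express) and print the BDF. The pipeline below is copied from that trace and can be run on its own; it lists every NVMe controller, which sw_hotplug.sh then truncates to nvme_count=2:

    # Print the PCI addresses of all NVMe controllers (class/subclass/progif 01/08/02).
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'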
00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.922 14:17:53 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:11.922 14:17:53 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:11.922 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:12.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.442 Waiting for block devices as requested 00:10:12.442 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.703 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.703 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.703 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.015 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:18.015 14:17:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:18.015 14:17:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:18.276 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:18.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.276 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:18.537 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:18.799 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.799 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78882 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:19.061 14:18:00 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:19.061 14:18:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:19.323 Initializing NVMe Controllers 00:10:19.323 Attaching to 0000:00:10.0 00:10:19.323 Attaching to 0000:00:11.0 00:10:19.323 Attached to 0000:00:11.0 00:10:19.323 Attached to 0000:00:10.0 00:10:19.323 Initialization complete. Starting I/O... 
00:10:19.323 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:19.324 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:19.324 00:10:20.268 QEMU NVMe Ctrl (12341 ): 2489 I/Os completed (+2489) 00:10:20.268 QEMU NVMe Ctrl (12340 ): 2492 I/Os completed (+2492) 00:10:20.268 00:10:21.213 QEMU NVMe Ctrl (12341 ): 5605 I/Os completed (+3116) 00:10:21.213 QEMU NVMe Ctrl (12340 ): 5608 I/Os completed (+3116) 00:10:21.213 00:10:22.157 QEMU NVMe Ctrl (12341 ): 8705 I/Os completed (+3100) 00:10:22.157 QEMU NVMe Ctrl (12340 ): 8708 I/Os completed (+3100) 00:10:22.157 00:10:23.102 QEMU NVMe Ctrl (12341 ): 11801 I/Os completed (+3096) 00:10:23.102 QEMU NVMe Ctrl (12340 ): 11800 I/Os completed (+3092) 00:10:23.102 00:10:24.489 QEMU NVMe Ctrl (12341 ): 14989 I/Os completed (+3188) 00:10:24.489 QEMU NVMe Ctrl (12340 ): 15054 I/Os completed (+3254) 00:10:24.489 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.063 [2024-11-29 14:18:06.719141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:25.063 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:25.063 [2024-11-29 14:18:06.720651] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.720815] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.720859] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.720891] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.063 [2024-11-29 14:18:06.722313] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.722550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.722637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.722675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.063 [2024-11-29 14:18:06.765946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:25.063 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:25.063 [2024-11-29 14:18:06.767241] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.767301] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.767320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.767335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.063 [2024-11-29 14:18:06.768562] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.768597] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.768616] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 [2024-11-29 14:18:06.768629] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:25.063 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.324 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.324 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:25.324 Attaching to 0000:00:10.0 00:10:25.324 Attached to 0000:00:10.0 00:10:25.324 14:18:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.324 14:18:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.324 14:18:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:25.324 Attaching to 0000:00:11.0 00:10:25.324 Attached to 0000:00:11.0 00:10:26.265 QEMU NVMe Ctrl (12340 ): 2819 I/Os completed (+2819) 00:10:26.265 QEMU NVMe Ctrl (12341 ): 2563 I/Os completed (+2563) 00:10:26.265 00:10:27.211 QEMU NVMe Ctrl (12340 ): 6023 I/Os completed (+3204) 00:10:27.211 QEMU NVMe Ctrl (12341 ): 5785 I/Os completed (+3222) 00:10:27.211 00:10:28.157 QEMU NVMe Ctrl (12340 ): 9311 I/Os completed (+3288) 00:10:28.157 QEMU NVMe Ctrl (12341 ): 9073 I/Os completed (+3288) 00:10:28.157 00:10:29.101 QEMU NVMe Ctrl (12340 ): 13528 I/Os completed (+4217) 00:10:29.101 QEMU NVMe Ctrl (12341 ): 13280 I/Os completed (+4207) 00:10:29.101 00:10:30.484 QEMU NVMe Ctrl (12340 ): 17841 I/Os completed (+4313) 00:10:30.484 QEMU NVMe Ctrl (12341 ): 17591 I/Os completed (+4311) 00:10:30.484 00:10:31.426 QEMU NVMe Ctrl (12340 ): 22044 I/Os completed (+4203) 00:10:31.426 QEMU NVMe Ctrl (12341 ): 21807 I/Os completed (+4216) 00:10:31.426 00:10:32.370 QEMU NVMe Ctrl (12340 ): 26267 I/Os completed (+4223) 00:10:32.370 QEMU NVMe Ctrl (12341 ): 26028 I/Os completed (+4221) 00:10:32.370 00:10:33.420 QEMU NVMe Ctrl (12340 ): 30474 I/Os completed (+4207) 00:10:33.420 QEMU NVMe Ctrl (12341 ): 30235 I/Os completed (+4207) 
00:10:33.420 00:10:34.363 QEMU NVMe Ctrl (12340 ): 34672 I/Os completed (+4198) 00:10:34.363 QEMU NVMe Ctrl (12341 ): 34447 I/Os completed (+4212) 00:10:34.363 00:10:35.305 QEMU NVMe Ctrl (12340 ): 38863 I/Os completed (+4191) 00:10:35.305 QEMU NVMe Ctrl (12341 ): 38633 I/Os completed (+4186) 00:10:35.305 00:10:36.246 QEMU NVMe Ctrl (12340 ): 43207 I/Os completed (+4344) 00:10:36.246 QEMU NVMe Ctrl (12341 ): 42981 I/Os completed (+4348) 00:10:36.246 00:10:37.189 QEMU NVMe Ctrl (12340 ): 47427 I/Os completed (+4220) 00:10:37.189 QEMU NVMe Ctrl (12341 ): 47189 I/Os completed (+4208) 00:10:37.189 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.450 [2024-11-29 14:18:19.101927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:37.450 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:37.450 [2024-11-29 14:18:19.102805] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.102914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.102945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.103006] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:37.450 [2024-11-29 14:18:19.104046] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.104148] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.104175] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.104197] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:37.450 [2024-11-29 14:18:19.123428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:37.450 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:37.450 [2024-11-29 14:18:19.124276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.124328] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.124355] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.124431] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:37.450 [2024-11-29 14:18:19.125324] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.125397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.125425] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 [2024-11-29 14:18:19.125473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.450 EAL: Cannot open sysfs resource 00:10:37.450 EAL: pci_scan_one(): cannot parse resource 00:10:37.450 EAL: Scan for (pci) bus failed. 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.450 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:37.712 Attaching to 0000:00:10.0 00:10:37.712 Attached to 0000:00:10.0 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.712 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:37.712 Attaching to 0000:00:11.0 00:10:37.712 Attached to 0000:00:11.0 00:10:38.284 QEMU NVMe Ctrl (12340 ): 2727 I/Os completed (+2727) 00:10:38.284 QEMU NVMe Ctrl (12341 ): 2356 I/Os completed (+2356) 00:10:38.284 00:10:39.227 QEMU NVMe Ctrl (12340 ): 6933 I/Os completed (+4206) 00:10:39.227 QEMU NVMe Ctrl (12341 ): 6550 I/Os completed (+4194) 00:10:39.227 00:10:40.169 QEMU NVMe Ctrl (12340 ): 11343 I/Os completed (+4410) 00:10:40.169 QEMU NVMe Ctrl (12341 ): 10954 I/Os completed (+4404) 00:10:40.169 00:10:41.112 QEMU NVMe Ctrl (12340 ): 15477 I/Os completed (+4134) 00:10:41.112 QEMU NVMe Ctrl (12341 ): 15097 I/Os completed (+4143) 00:10:41.112 00:10:42.497 QEMU NVMe Ctrl (12340 ): 18669 I/Os completed (+3192) 00:10:42.498 QEMU NVMe Ctrl (12341 ): 18302 I/Os completed (+3205) 00:10:42.498 00:10:43.439 QEMU NVMe Ctrl (12340 ): 21657 I/Os completed (+2988) 00:10:43.439 QEMU NVMe Ctrl (12341 ): 21294 I/Os completed (+2992) 00:10:43.439 00:10:44.401 QEMU NVMe Ctrl (12340 ): 24702 I/Os completed (+3045) 00:10:44.401 QEMU NVMe Ctrl (12341 ): 24344 I/Os completed (+3050) 00:10:44.401 
00:10:45.346 QEMU NVMe Ctrl (12340 ): 27738 I/Os completed (+3036) 00:10:45.346 QEMU NVMe Ctrl (12341 ): 27390 I/Os completed (+3046) 00:10:45.346 00:10:46.290 QEMU NVMe Ctrl (12340 ): 30827 I/Os completed (+3089) 00:10:46.290 QEMU NVMe Ctrl (12341 ): 30468 I/Os completed (+3078) 00:10:46.290 00:10:47.233 QEMU NVMe Ctrl (12340 ): 34239 I/Os completed (+3412) 00:10:47.233 QEMU NVMe Ctrl (12341 ): 33883 I/Os completed (+3415) 00:10:47.233 00:10:48.177 QEMU NVMe Ctrl (12340 ): 37239 I/Os completed (+3000) 00:10:48.177 QEMU NVMe Ctrl (12341 ): 36883 I/Os completed (+3000) 00:10:48.177 00:10:49.121 QEMU NVMe Ctrl (12340 ): 40312 I/Os completed (+3073) 00:10:49.121 QEMU NVMe Ctrl (12341 ): 39956 I/Os completed (+3073) 00:10:49.121 00:10:49.692 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:49.692 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.692 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.692 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.693 [2024-11-29 14:18:31.370632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:49.693 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:49.693 [2024-11-29 14:18:31.371530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.371631] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.371662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.371719] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.693 [2024-11-29 14:18:31.372814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.372869] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.372896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.372928] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.693 [2024-11-29 14:18:31.394146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:49.693 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:49.693 [2024-11-29 14:18:31.394983] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.395081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.395139] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.395166] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.693 [2024-11-29 14:18:31.396101] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.396228] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.396258] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 [2024-11-29 14:18:31.396306] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.693 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:49.693 EAL: Scan for (pci) bus failed. 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.693 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:49.953 Attaching to 0000:00:10.0 00:10:49.953 Attached to 0000:00:10.0 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.953 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.953 Attaching to 0000:00:11.0 00:10:49.953 Attached to 0000:00:11.0 00:10:49.953 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.953 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.953 [2024-11-29 14:18:31.636424] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:02.189 14:18:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:02.189 14:18:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.189 14:18:43 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.92 00:11:02.189 14:18:43 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.92 00:11:02.189 14:18:43 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:02.189 14:18:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.92 00:11:02.189 14:18:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.92 2 00:11:02.189 remove_attach_helper took 42.92s to complete (handling 2 nvme drive(s)) 14:18:43 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78882 00:11:08.818 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78882) - No such process 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78882 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79431 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:08.818 14:18:49 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79431 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79431 ']' 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.818 14:18:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 [2024-11-29 14:18:49.718817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:08.818 [2024-11-29 14:18:49.718967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79431 ] 00:11:08.818 [2024-11-29 14:18:49.871810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.818 [2024-11-29 14:18:49.920389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:08.818 14:18:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:08.818 14:18:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.437 14:18:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.437 14:18:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.437 14:18:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:15.437 14:18:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.437 [2024-11-29 14:18:56.656720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:15.437 [2024-11-29 14:18:56.657779] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.437 [2024-11-29 14:18:56.657815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.437 [2024-11-29 14:18:56.657830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.437 [2024-11-29 14:18:56.657842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.437 [2024-11-29 14:18:56.657852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.437 [2024-11-29 14:18:56.657859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.437 [2024-11-29 14:18:56.657869] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.437 [2024-11-29 14:18:56.657875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.437 [2024-11-29 14:18:56.657883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.437 [2024-11-29 14:18:56.657889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.437 [2024-11-29 14:18:56.657896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.437 [2024-11-29 14:18:56.657903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.437 14:18:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.437 14:18:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.437 14:18:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:15.437 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.721 [2024-11-29 14:18:57.356736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:15.721 [2024-11-29 14:18:57.357766] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.721 [2024-11-29 14:18:57.357798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.721 [2024-11-29 14:18:57.357807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.721 [2024-11-29 14:18:57.357819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.721 [2024-11-29 14:18:57.357828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.721 [2024-11-29 14:18:57.357836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.721 [2024-11-29 14:18:57.357843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.721 [2024-11-29 14:18:57.357851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.721 [2024-11-29 14:18:57.357857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.721 [2024-11-29 14:18:57.357867] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.721 [2024-11-29 14:18:57.357873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.721 [2024-11-29 14:18:57.357881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.982 14:18:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.982 14:18:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.982 14:18:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.982 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.243 14:18:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.475 14:19:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.475 14:19:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.475 14:19:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.475 14:19:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.475 [2024-11-29 14:19:10.056962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:28.475 [2024-11-29 14:19:10.058181] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.475 [2024-11-29 14:19:10.058216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.475 [2024-11-29 14:19:10.058227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.475 [2024-11-29 14:19:10.058239] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.475 [2024-11-29 14:19:10.058252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.475 [2024-11-29 14:19:10.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.475 [2024-11-29 14:19:10.058268] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.475 [2024-11-29 14:19:10.058275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.475 [2024-11-29 14:19:10.058283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.475 [2024-11-29 14:19:10.058304] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.475 [2024-11-29 14:19:10.058312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.475 [2024-11-29 14:19:10.058319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.475 14:19:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.475 14:19:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.475 14:19:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:28.475 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:28.736 [2024-11-29 14:19:10.456966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:28.736 [2024-11-29 14:19:10.458001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.736 [2024-11-29 14:19:10.458034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.736 [2024-11-29 14:19:10.458044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.736 [2024-11-29 14:19:10.458057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.736 [2024-11-29 14:19:10.458064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.736 [2024-11-29 14:19:10.458072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.736 [2024-11-29 14:19:10.458080] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.736 [2024-11-29 14:19:10.458088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.736 [2024-11-29 14:19:10.458094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.736 [2024-11-29 14:19:10.458102] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.736 [2024-11-29 14:19:10.458108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.736 [2024-11-29 14:19:10.458116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:11:28.997 14:19:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.997 14:19:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.997 14:19:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.997 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.998 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.998 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.259 14:19:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 [2024-11-29 14:19:22.957186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:41.487 [2024-11-29 14:19:22.958236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.487 [2024-11-29 14:19:22.958270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.487 [2024-11-29 14:19:22.958284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.487 [2024-11-29 14:19:22.958295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.487 [2024-11-29 14:19:22.958304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.487 [2024-11-29 14:19:22.958311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.487 [2024-11-29 14:19:22.958319] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.487 [2024-11-29 14:19:22.958325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.487 [2024-11-29 14:19:22.958333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.487 [2024-11-29 14:19:22.958340] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.487 [2024-11-29 14:19:22.958347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.487 [2024-11-29 14:19:22.958354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.487 14:19:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:41.487 14:19:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.748 [2024-11-29 14:19:23.357184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:41.748 [2024-11-29 14:19:23.358181] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.748 [2024-11-29 14:19:23.358213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.749 [2024-11-29 14:19:23.358222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.749 [2024-11-29 14:19:23.358233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.749 [2024-11-29 14:19:23.358241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.749 [2024-11-29 14:19:23.358250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.749 [2024-11-29 14:19:23.358257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.749 [2024-11-29 14:19:23.358265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.749 [2024-11-29 14:19:23.358271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.749 [2024-11-29 14:19:23.358279] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.749 [2024-11-29 14:19:23.358285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.749 [2024-11-29 14:19:23.358293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.749 14:19:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.749 14:19:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.749 14:19:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:41.749 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.010 14:19:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:54.242 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:54.242 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:54.242 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.23 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.23 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:11:54.243 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:54.243 14:19:35 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:54.243 14:19:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:54.243 14:19:35 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.811 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.812 14:19:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.812 14:19:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.812 14:19:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.812 14:19:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.812 [2024-11-29 14:19:41.921522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:00.812 [2024-11-29 14:19:41.922701] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:41.922734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:41.922746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:41.922758] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:41.922766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:41.922773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:41.922781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:41.922788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:41.922797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:41.922803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:41.922810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:41.922816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:42.321520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:00.812 [2024-11-29 14:19:42.322244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:42.322276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:42.322286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:42.322297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:42.322304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:42.322313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:42.322319] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:42.322327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:42.322334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 [2024-11-29 14:19:42.322341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.812 [2024-11-29 14:19:42.322348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.812 [2024-11-29 14:19:42.322357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.812 14:19:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.812 14:19:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.812 14:19:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.812 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:01.072 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.072 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.072 14:19:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.299 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.300 [2024-11-29 14:19:54.721726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:13.300 [2024-11-29 14:19:54.722982] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.300 [2024-11-29 14:19:54.723036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.300 [2024-11-29 14:19:54.723071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.300 [2024-11-29 14:19:54.723100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.300 [2024-11-29 14:19:54.723170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.300 [2024-11-29 14:19:54.723197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.300 [2024-11-29 14:19:54.723251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.300 [2024-11-29 14:19:54.723271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.300 [2024-11-29 14:19:54.723322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.300 [2024-11-29 14:19:54.723348] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.300 [2024-11-29 14:19:54.723365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.300 [2024-11-29 14:19:54.723411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.300 14:19:54 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.300 14:19:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:13.300 14:19:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:13.570 [2024-11-29 14:19:55.221734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:13.570 [2024-11-29 14:19:55.222740] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.570 [2024-11-29 14:19:55.222775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.570 [2024-11-29 14:19:55.222785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.570 [2024-11-29 14:19:55.222796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.570 [2024-11-29 14:19:55.222803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.570 [2024-11-29 14:19:55.222811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.570 [2024-11-29 14:19:55.222818] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.570 [2024-11-29 14:19:55.222827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.570 [2024-11-29 14:19:55.222833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.570 [2024-11-29 14:19:55.222840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.570 [2024-11-29 14:19:55.222847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.570 [2024-11-29 14:19:55.222855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:13.570 14:19:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.570 14:19:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.570 14:19:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:13.570 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.831 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:13.832 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:13.832 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.832 14:19:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.069 [2024-11-29 14:20:07.621965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:26.069 [2024-11-29 14:20:07.623029] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.069 [2024-11-29 14:20:07.623139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.069 [2024-11-29 14:20:07.623204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.069 [2024-11-29 14:20:07.623264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.069 [2024-11-29 14:20:07.623289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.069 [2024-11-29 14:20:07.623338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.069 [2024-11-29 14:20:07.623366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.069 [2024-11-29 14:20:07.623410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.069 [2024-11-29 14:20:07.623438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.069 [2024-11-29 14:20:07.623486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.069 [2024-11-29 14:20:07.623517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.069 [2024-11-29 14:20:07.623570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.069 14:20:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:26.069 14:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:26.331 [2024-11-29 14:20:08.121979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
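Between removal and re-attach the script polls bdev_bdfs until the removed controllers stop showing up, which is why the trace alternates rpc_cmd bdev_get_bdevs, the jq pci_address filter, sort -u, a 0.5 s sleep and the 'Still waiting for %s to be gone' printf. A rough stand-alone equivalent, using scripts/rpc.py in place of the in-process rpc_cmd wrapper (the rpc.py path is inferred from the repo layout seen elsewhere in this log):

# PCI addresses (BDFs) behind the NVMe bdevs the target currently exposes.
bdev_bdfs() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Poll every 0.5 s until the removed devices are really gone (sw_hotplug.sh@50/@51).
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done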
00:12:26.331 [2024-11-29 14:20:08.122979] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.331 [2024-11-29 14:20:08.123011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.331 [2024-11-29 14:20:08.123020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.331 [2024-11-29 14:20:08.123030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.331 [2024-11-29 14:20:08.123038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.331 [2024-11-29 14:20:08.123046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.331 [2024-11-29 14:20:08.123053] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.331 [2024-11-29 14:20:08.123062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.331 [2024-11-29 14:20:08.123069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.331 [2024-11-29 14:20:08.123078] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.331 [2024-11-29 14:20:08.123084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.331 [2024-11-29 14:20:08.123092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.593 14:20:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.593 14:20:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.593 14:20:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.593 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
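Once bdev_bdfs comes back empty, the loop re-attaches the devices: an echo 1 at sw_hotplug.sh@56 followed, per device, by uio_pci_generic, the BDF twice and an empty string (@58-@62). Only the echoed values are visible in the xtrace, so the sysfs targets below are assumptions based on the conventional driver_override rebind flow (and the double BDF write is collapsed into a single probe):

# Re-bind a re-scanned NVMe function to the userspace driver (targets assumed).
bdf=0000:00:11.0
drv=uio_pci_generic

echo 1 > /sys/bus/pci/rescan                                  # @56: make the function visible again
echo "$drv" > "/sys/bus/pci/devices/${bdf}/driver_override"   # @59: pin the driver choice
echo "$bdf" > /sys/bus/pci/drivers_probe                      # @60/@61: ask the kernel to probe it
echo "" > "/sys/bus/pci/devices/${bdf}/driver_override"       # @62: clear the override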
00:12:26.853 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:26.853 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.853 14:20:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.65 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.65 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:12:39.095 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79431 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79431 ']' 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79431 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79431 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79431' 00:12:39.095 killing process with pid 79431 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79431 00:12:39.095 14:20:20 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79431 00:12:39.095 14:20:20 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:39.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.017 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:40.017 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:40.017 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.017 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.017 ************************************ 00:12:40.017 END TEST sw_hotplug 00:12:40.017 ************************************ 00:12:40.017 
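The 44.65 s figure above comes from the timing_cmd wrapper: with TIMEFORMAT=%2R (declared earlier in the trace), bash's time keyword prints only the wall-clock seconds to two decimals, which the helper captures and feeds into the remove_attach_helper printf summary. A simplified sketch of that pattern; the real timing_cmd in autotest_common.sh does extra fd bookkeeping (the exec and [[ -t 0 ]] lines in the trace), while here the timed command's stdout is simply discarded and any stderr it writes would land in the captured value:

# Capture a command's wall-clock runtime to two decimals, TIMEFORMAT-style.
run_timed() {
    local TIMEFORMAT=%2R elapsed
    elapsed=$( { time "$@" > /dev/null; } 2>&1 )   # `time` reports on stderr; capture it
    printf '%s took %ss to complete\n' "$1" "$elapsed"
}

run_timed sleep 1    # prints something like: sleep took 1.00s to complete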
00:12:40.017 real 2m28.730s 00:12:40.017 user 1m49.044s 00:12:40.017 sys 0m18.127s 00:12:40.017 14:20:21 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.017 14:20:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.017 14:20:21 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:40.017 14:20:21 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:40.017 14:20:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.017 14:20:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.017 14:20:21 -- common/autotest_common.sh@10 -- # set +x 00:12:40.017 ************************************ 00:12:40.017 START TEST nvme_xnvme 00:12:40.017 ************************************ 00:12:40.017 14:20:21 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:40.299 * Looking for test storage... 00:12:40.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.299 14:20:21 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:40.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.299 --rc genhtml_branch_coverage=1 00:12:40.299 --rc genhtml_function_coverage=1 00:12:40.299 --rc genhtml_legend=1 00:12:40.299 --rc geninfo_all_blocks=1 00:12:40.299 --rc geninfo_unexecuted_blocks=1 00:12:40.299 00:12:40.299 ' 00:12:40.299 14:20:21 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:40.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.300 --rc genhtml_branch_coverage=1 00:12:40.300 --rc genhtml_function_coverage=1 00:12:40.300 --rc genhtml_legend=1 00:12:40.300 --rc geninfo_all_blocks=1 00:12:40.300 --rc geninfo_unexecuted_blocks=1 00:12:40.300 00:12:40.300 ' 00:12:40.300 14:20:21 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:40.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.300 --rc genhtml_branch_coverage=1 00:12:40.300 --rc genhtml_function_coverage=1 00:12:40.300 --rc genhtml_legend=1 00:12:40.300 --rc geninfo_all_blocks=1 00:12:40.300 --rc geninfo_unexecuted_blocks=1 00:12:40.300 00:12:40.300 ' 00:12:40.300 14:20:21 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:40.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.300 --rc genhtml_branch_coverage=1 00:12:40.300 --rc genhtml_function_coverage=1 00:12:40.300 --rc genhtml_legend=1 00:12:40.300 --rc geninfo_all_blocks=1 00:12:40.300 --rc geninfo_unexecuted_blocks=1 00:12:40.300 00:12:40.300 ' 00:12:40.300 14:20:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.300 14:20:21 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.300 14:20:21 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.300 14:20:21 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.300 14:20:21 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.300 14:20:21 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.300 14:20:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.300 14:20:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.300 14:20:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:40.300 14:20:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.300 14:20:21 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:40.300 14:20:21 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.300 14:20:21 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.300 14:20:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.300 ************************************ 00:12:40.300 START TEST xnvme_to_malloc_dd_copy 00:12:40.300 ************************************ 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:40.300 14:20:21 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:40.300 14:20:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:40.300 { 00:12:40.300 "subsystems": [ 00:12:40.300 { 00:12:40.300 "subsystem": "bdev", 00:12:40.300 "config": [ 00:12:40.300 { 00:12:40.300 "params": { 00:12:40.300 "block_size": 512, 00:12:40.300 "num_blocks": 2097152, 00:12:40.300 "name": "malloc0" 00:12:40.300 }, 00:12:40.300 "method": "bdev_malloc_create" 00:12:40.300 }, 00:12:40.300 { 00:12:40.300 "params": { 00:12:40.300 "io_mechanism": "libaio", 00:12:40.300 "filename": "/dev/nullb0", 00:12:40.300 "name": "null0" 00:12:40.300 }, 00:12:40.300 "method": "bdev_xnvme_create" 00:12:40.300 }, 00:12:40.300 { 00:12:40.300 "method": "bdev_wait_for_examine" 00:12:40.300 } 00:12:40.300 ] 00:12:40.300 } 00:12:40.300 ] 00:12:40.300 } 00:12:40.300 [2024-11-29 14:20:22.030688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
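The JSON blob just printed is the entire bdev configuration for this copy: a 512-byte, 2097152-block malloc bdev as the source and an xnvme bdev (libaio on /dev/nullb0) as the destination, handed to spdk_dd on an anonymous fd via --json /dev/fd/62. The same run can be reproduced by hand with the config in an ordinary file; the /tmp path below is illustrative:

cat > /tmp/xnvme_dd.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json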
00:12:40.300 [2024-11-29 14:20:22.030973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80794 ] 00:12:40.562 [2024-11-29 14:20:22.182972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.562 [2024-11-29 14:20:22.233338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.950  [2024-11-29T14:20:24.688Z] Copying: 222/1024 [MB] (222 MBps) [2024-11-29T14:20:25.643Z] Copying: 447/1024 [MB] (224 MBps) [2024-11-29T14:20:26.585Z] Copying: 735/1024 [MB] (288 MBps) [2024-11-29T14:20:27.156Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:12:45.362 00:12:45.362 14:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:45.362 14:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:45.362 14:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:45.362 14:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.362 { 00:12:45.362 "subsystems": [ 00:12:45.362 { 00:12:45.362 "subsystem": "bdev", 00:12:45.362 "config": [ 00:12:45.362 { 00:12:45.362 "params": { 00:12:45.362 "block_size": 512, 00:12:45.362 "num_blocks": 2097152, 00:12:45.362 "name": "malloc0" 00:12:45.362 }, 00:12:45.362 "method": "bdev_malloc_create" 00:12:45.362 }, 00:12:45.362 { 00:12:45.362 "params": { 00:12:45.362 "io_mechanism": "libaio", 00:12:45.362 "filename": "/dev/nullb0", 00:12:45.362 "name": "null0" 00:12:45.362 }, 00:12:45.362 "method": "bdev_xnvme_create" 00:12:45.362 }, 00:12:45.362 { 00:12:45.362 "method": "bdev_wait_for_examine" 00:12:45.362 } 00:12:45.362 ] 00:12:45.362 } 00:12:45.362 ] 00:12:45.362 } 00:12:45.362 [2024-11-29 14:20:26.932319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:45.363 [2024-11-29 14:20:26.932436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80854 ] 00:12:45.363 [2024-11-29 14:20:27.076308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.363 [2024-11-29 14:20:27.107631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.749  [2024-11-29T14:20:29.485Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-29T14:20:30.427Z] Copying: 620/1024 [MB] (311 MBps) [2024-11-29T14:20:30.688Z] Copying: 931/1024 [MB] (310 MBps) [2024-11-29T14:20:31.258Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:12:49.464 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:49.464 14:20:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:49.464 { 00:12:49.464 "subsystems": [ 00:12:49.464 { 00:12:49.464 "subsystem": "bdev", 00:12:49.464 "config": [ 00:12:49.464 { 00:12:49.464 "params": { 00:12:49.464 "block_size": 512, 00:12:49.464 "num_blocks": 2097152, 00:12:49.464 "name": "malloc0" 00:12:49.464 }, 00:12:49.464 "method": "bdev_malloc_create" 00:12:49.464 }, 00:12:49.464 { 00:12:49.464 "params": { 00:12:49.464 "io_mechanism": "io_uring", 00:12:49.464 "filename": "/dev/nullb0", 00:12:49.464 "name": "null0" 00:12:49.464 }, 00:12:49.464 "method": "bdev_xnvme_create" 00:12:49.464 }, 00:12:49.464 { 00:12:49.464 "method": "bdev_wait_for_examine" 00:12:49.464 } 00:12:49.464 ] 00:12:49.464 } 00:12:49.464 ] 00:12:49.464 } 00:12:49.464 [2024-11-29 14:20:31.025732] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
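Every copy in this test runs against the same backing device: init_null_blk loads null_blk with gb=1, which creates the 1 GiB /dev/nullb0 (at the module's default 512-byte block size that is the same 2097152 blocks the malloc bdev is sized to), and remove_null_blk unloads the module again at the end. Roughly:

# Create the throw-away 1 GiB null block device the xnvme bdev sits on.
modprobe null_blk gb=1        # /dev/nullb0 appears; 1 GiB of 512 B blocks = 2097152 blocks
lsblk -b /dev/nullb0          # optional sanity check: size should read 1073741824 bytes

# ... run spdk_dd / bdevperf against /dev/nullb0 ...

modprobe -r null_blk          # remove_null_blk: drop the device once the test is done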
00:12:49.464 [2024-11-29 14:20:31.025973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80908 ] 00:12:49.464 [2024-11-29 14:20:31.172798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.464 [2024-11-29 14:20:31.210978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.850  [2024-11-29T14:20:33.588Z] Copying: 316/1024 [MB] (316 MBps) [2024-11-29T14:20:34.531Z] Copying: 634/1024 [MB] (317 MBps) [2024-11-29T14:20:34.793Z] Copying: 952/1024 [MB] (317 MBps) [2024-11-29T14:20:35.054Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:53.260 00:12:53.260 14:20:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:53.260 14:20:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:53.260 14:20:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:53.260 14:20:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.260 { 00:12:53.260 "subsystems": [ 00:12:53.260 { 00:12:53.260 "subsystem": "bdev", 00:12:53.260 "config": [ 00:12:53.260 { 00:12:53.260 "params": { 00:12:53.260 "block_size": 512, 00:12:53.260 "num_blocks": 2097152, 00:12:53.260 "name": "malloc0" 00:12:53.260 }, 00:12:53.260 "method": "bdev_malloc_create" 00:12:53.260 }, 00:12:53.260 { 00:12:53.260 "params": { 00:12:53.260 "io_mechanism": "io_uring", 00:12:53.260 "filename": "/dev/nullb0", 00:12:53.260 "name": "null0" 00:12:53.260 }, 00:12:53.260 "method": "bdev_xnvme_create" 00:12:53.260 }, 00:12:53.260 { 00:12:53.260 "method": "bdev_wait_for_examine" 00:12:53.260 } 00:12:53.260 ] 00:12:53.260 } 00:12:53.260 ] 00:12:53.260 } 00:12:53.260 [2024-11-29 14:20:35.038173] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:53.260 [2024-11-29 14:20:35.038404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80963 ] 00:12:53.519 [2024-11-29 14:20:35.185159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.519 [2024-11-29 14:20:35.220659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.907  [2024-11-29T14:20:37.644Z] Copying: 322/1024 [MB] (322 MBps) [2024-11-29T14:20:38.588Z] Copying: 646/1024 [MB] (323 MBps) [2024-11-29T14:20:38.849Z] Copying: 969/1024 [MB] (323 MBps) [2024-11-29T14:20:39.110Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:12:57.316 00:12:57.316 14:20:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:57.316 14:20:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:57.316 00:12:57.316 real 0m17.036s 00:12:57.316 user 0m14.020s 00:12:57.316 sys 0m2.499s 00:12:57.316 ************************************ 00:12:57.316 END TEST xnvme_to_malloc_dd_copy 00:12:57.316 ************************************ 00:12:57.316 14:20:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.316 14:20:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.316 14:20:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:57.316 14:20:39 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:57.316 14:20:39 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.316 14:20:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.316 ************************************ 00:12:57.316 START TEST xnvme_bdevperf 00:12:57.316 ************************************ 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:57.316 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:57.317 
14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.317 14:20:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.317 { 00:12:57.317 "subsystems": [ 00:12:57.317 { 00:12:57.317 "subsystem": "bdev", 00:12:57.317 "config": [ 00:12:57.317 { 00:12:57.317 "params": { 00:12:57.317 "io_mechanism": "libaio", 00:12:57.317 "filename": "/dev/nullb0", 00:12:57.317 "name": "null0" 00:12:57.317 }, 00:12:57.317 "method": "bdev_xnvme_create" 00:12:57.317 }, 00:12:57.317 { 00:12:57.317 "method": "bdev_wait_for_examine" 00:12:57.317 } 00:12:57.317 ] 00:12:57.317 } 00:12:57.317 ] 00:12:57.317 } 00:12:57.317 [2024-11-29 14:20:39.104327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:57.317 [2024-11-29 14:20:39.104434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81042 ] 00:12:57.577 [2024-11-29 14:20:39.252949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.577 [2024-11-29 14:20:39.297962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.839 Running I/O for 5 seconds... 00:12:59.726 155584.00 IOPS, 607.75 MiB/s [2024-11-29T14:20:42.464Z] 171616.00 IOPS, 670.38 MiB/s [2024-11-29T14:20:43.408Z] 184448.00 IOPS, 720.50 MiB/s [2024-11-29T14:20:44.794Z] 190928.00 IOPS, 745.81 MiB/s 00:13:03.000 Latency(us) 00:13:03.000 [2024-11-29T14:20:44.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.000 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:03.000 null0 : 5.00 194737.73 760.69 0.00 0.00 326.51 106.34 2041.70 00:13:03.000 [2024-11-29T14:20:44.794Z] =================================================================================================================== 00:13:03.000 [2024-11-29T14:20:44.794Z] Total : 194737.73 760.69 0.00 0.00 326.51 106.34 2041.70 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.000 14:20:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.000 { 00:13:03.000 "subsystems": [ 00:13:03.000 { 00:13:03.000 "subsystem": "bdev", 00:13:03.000 "config": [ 00:13:03.000 { 00:13:03.000 "params": { 00:13:03.000 "io_mechanism": "io_uring", 00:13:03.000 "filename": "/dev/nullb0", 00:13:03.000 "name": "null0" 00:13:03.000 }, 00:13:03.000 "method": "bdev_xnvme_create" 00:13:03.000 }, 00:13:03.000 { 00:13:03.000 "method": 
"bdev_wait_for_examine" 00:13:03.000 } 00:13:03.000 ] 00:13:03.000 } 00:13:03.000 ] 00:13:03.000 } 00:13:03.000 [2024-11-29 14:20:44.617618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:03.000 [2024-11-29 14:20:44.617735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81106 ] 00:13:03.000 [2024-11-29 14:20:44.765097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.260 [2024-11-29 14:20:44.801967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.260 Running I/O for 5 seconds... 00:13:05.150 238528.00 IOPS, 931.75 MiB/s [2024-11-29T14:20:47.922Z] 238272.00 IOPS, 930.75 MiB/s [2024-11-29T14:20:48.886Z] 238293.33 IOPS, 930.83 MiB/s [2024-11-29T14:20:50.275Z] 238304.00 IOPS, 930.88 MiB/s [2024-11-29T14:20:50.275Z] 238297.60 IOPS, 930.85 MiB/s 00:13:08.481 Latency(us) 00:13:08.481 [2024-11-29T14:20:50.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.481 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:08.481 null0 : 5.00 238228.07 930.58 0.00 0.00 266.20 253.64 1474.56 00:13:08.481 [2024-11-29T14:20:50.275Z] =================================================================================================================== 00:13:08.481 [2024-11-29T14:20:50.275Z] Total : 238228.07 930.58 0.00 0.00 266.20 253.64 1474.56 00:13:08.481 14:20:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:08.481 14:20:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:08.481 00:13:08.481 real 0m11.038s 00:13:08.481 user 0m8.646s 00:13:08.481 sys 0m2.152s 00:13:08.481 14:20:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.481 14:20:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 ************************************ 00:13:08.481 END TEST xnvme_bdevperf 00:13:08.481 ************************************ 00:13:08.481 ************************************ 00:13:08.481 END TEST nvme_xnvme 00:13:08.481 ************************************ 00:13:08.481 00:13:08.481 real 0m28.341s 00:13:08.481 user 0m22.782s 00:13:08.481 sys 0m4.770s 00:13:08.481 14:20:50 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.481 14:20:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 14:20:50 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:08.481 14:20:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:08.481 14:20:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.481 14:20:50 -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 ************************************ 00:13:08.481 START TEST blockdev_xnvme 00:13:08.481 ************************************ 00:13:08.481 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:08.481 * Looking for test storage... 
00:13:08.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:08.481 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:08.481 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:08.481 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.743 14:20:50 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:08.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.743 --rc genhtml_branch_coverage=1 00:13:08.743 --rc genhtml_function_coverage=1 00:13:08.743 --rc genhtml_legend=1 00:13:08.743 --rc geninfo_all_blocks=1 00:13:08.743 --rc geninfo_unexecuted_blocks=1 00:13:08.743 00:13:08.743 ' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:08.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.743 --rc genhtml_branch_coverage=1 00:13:08.743 --rc genhtml_function_coverage=1 00:13:08.743 --rc genhtml_legend=1 
00:13:08.743 --rc geninfo_all_blocks=1 00:13:08.743 --rc geninfo_unexecuted_blocks=1 00:13:08.743 00:13:08.743 ' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:08.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.743 --rc genhtml_branch_coverage=1 00:13:08.743 --rc genhtml_function_coverage=1 00:13:08.743 --rc genhtml_legend=1 00:13:08.743 --rc geninfo_all_blocks=1 00:13:08.743 --rc geninfo_unexecuted_blocks=1 00:13:08.743 00:13:08.743 ' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:08.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.743 --rc genhtml_branch_coverage=1 00:13:08.743 --rc genhtml_function_coverage=1 00:13:08.743 --rc genhtml_legend=1 00:13:08.743 --rc geninfo_all_blocks=1 00:13:08.743 --rc geninfo_unexecuted_blocks=1 00:13:08.743 00:13:08.743 ' 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81245 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81245 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81245 ']' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.743 14:20:50 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:08.743 14:20:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.743 [2024-11-29 14:20:50.388113] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:08.743 [2024-11-29 14:20:50.388416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81245 ] 00:13:09.005 [2024-11-29 14:20:50.540797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.005 [2024-11-29 14:20:50.578703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.576 14:20:51 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.576 14:20:51 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:09.576 14:20:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:09.576 14:20:51 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:09.576 14:20:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:09.576 14:20:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:09.576 14:20:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:09.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:10.096 Waiting for block devices as requested 00:13:10.096 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.096 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.096 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.096 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.385 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:15.385 14:20:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.385 14:20:56 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.385 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:15.386 14:20:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:15.386 nvme0n1 00:13:15.386 nvme1n1 00:13:15.386 nvme2n1 00:13:15.386 nvme2n2 00:13:15.386 nvme2n3 00:13:15.386 nvme3n1 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:57 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c870cb9f-e7e5-4fee-a934-a51feef8ce2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c870cb9f-e7e5-4fee-a934-a51feef8ce2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "48c37a27-24c3-42ae-a578-7a8ee4aa6b4b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "48c37a27-24c3-42ae-a578-7a8ee4aa6b4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2e603118-a5cd-41da-8df2-2133eb79a779"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e603118-a5cd-41da-8df2-2133eb79a779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1b13b206-857a-4c66-988a-59ddceaa0b18"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b13b206-857a-4c66-988a-59ddceaa0b18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "59790cbc-733f-4e82-b233-29b828f01bda"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59790cbc-733f-4e82-b233-29b828f01bda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7815dc8b-a92a-4e26-9792-0613a8b49e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7815dc8b-a92a-4e26-9792-0613a8b49e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:15.386 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81245 
00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81245 ']' 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81245 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81245 00:13:15.386 killing process with pid 81245 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81245' 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81245 00:13:15.386 14:20:57 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81245 00:13:15.648 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:15.648 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:15.648 14:20:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:15.648 14:20:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.648 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.648 ************************************ 00:13:15.648 START TEST bdev_hello_world 00:13:15.648 ************************************ 00:13:15.648 14:20:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:15.909 [2024-11-29 14:20:57.469451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:15.909 [2024-11-29 14:20:57.469719] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81595 ] 00:13:15.909 [2024-11-29 14:20:57.616636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.909 [2024-11-29 14:20:57.655112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.170 [2024-11-29 14:20:57.813226] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:16.170 [2024-11-29 14:20:57.813265] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:16.170 [2024-11-29 14:20:57.813279] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:16.170 [2024-11-29 14:20:57.814788] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:16.170 [2024-11-29 14:20:57.815069] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:16.170 [2024-11-29 14:20:57.815087] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:16.170 [2024-11-29 14:20:57.815472] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
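The bdev_hello_world test above is the stock hello_bdev example pointed at the first xNVMe bdev: it opens nvme0n1, writes a buffer, reads it back, and stops once "Hello World!" is returned. The invocation, reproducible by hand with the same generated config, is roughly:

  # write to and read back from nvme0n1 through the bdev layer
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b nvme0n1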
00:13:16.170 00:13:16.170 [2024-11-29 14:20:57.815512] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:16.170 00:13:16.171 real 0m0.539s 00:13:16.171 user 0m0.275s 00:13:16.171 sys 0m0.156s 00:13:16.171 ************************************ 00:13:16.171 END TEST bdev_hello_world 00:13:16.171 ************************************ 00:13:16.171 14:20:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.171 14:20:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:16.432 14:20:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:16.432 14:20:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.432 14:20:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.432 14:20:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.432 ************************************ 00:13:16.432 START TEST bdev_bounds 00:13:16.432 ************************************ 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:16.432 Process bdevio pid: 81619 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81619 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81619' 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81619 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81619 ']' 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.432 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:16.433 [2024-11-29 14:20:58.079000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
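bdev_bounds drives the bdevio app launched just above in two steps: the server is started against the same bdev.json and, once its RPC socket is listening (waitforlisten on its pid), the companion tests.py script triggers the per-bdev CUnit suites whose results follow. Condensed to the two commands used in this run, backgrounding the server only for the sake of the sketch:

  # 1. bdevio server over the generated config (flags as passed by the harness)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # 2. trigger the test suites over RPC once the server is up
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests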
00:13:16.433 [2024-11-29 14:20:58.079308] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81619 ] 00:13:16.433 [2024-11-29 14:20:58.225011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.693 [2024-11-29 14:20:58.267681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.693 [2024-11-29 14:20:58.267917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.693 [2024-11-29 14:20:58.267987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.264 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.264 14:20:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:17.264 14:20:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:17.264 I/O targets: 00:13:17.264 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:17.264 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:17.264 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.264 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.264 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.264 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:17.264 00:13:17.264 00:13:17.264 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.264 http://cunit.sourceforge.net/ 00:13:17.264 00:13:17.264 00:13:17.264 Suite: bdevio tests on: nvme3n1 00:13:17.264 Test: blockdev write read block ...passed 00:13:17.264 Test: blockdev write zeroes read block ...passed 00:13:17.264 Test: blockdev write zeroes read no split ...passed 00:13:17.264 Test: blockdev write zeroes read split ...passed 00:13:17.264 Test: blockdev write zeroes read split partial ...passed 00:13:17.264 Test: blockdev reset ...passed 00:13:17.264 Test: blockdev write read 8 blocks ...passed 00:13:17.264 Test: blockdev write read size > 128k ...passed 00:13:17.264 Test: blockdev write read invalid size ...passed 00:13:17.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.264 Test: blockdev write read max offset ...passed 00:13:17.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.264 Test: blockdev writev readv 8 blocks ...passed 00:13:17.264 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.264 Test: blockdev writev readv block ...passed 00:13:17.264 Test: blockdev writev readv size > 128k ...passed 00:13:17.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.264 Test: blockdev comparev and writev ...passed 00:13:17.264 Test: blockdev nvme passthru rw ...passed 00:13:17.264 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.264 Test: blockdev nvme admin passthru ...passed 00:13:17.264 Test: blockdev copy ...passed 00:13:17.264 Suite: bdevio tests on: nvme2n3 00:13:17.264 Test: blockdev write read block ...passed 00:13:17.264 Test: blockdev write zeroes read block ...passed 00:13:17.264 Test: blockdev write zeroes read no split ...passed 00:13:17.264 Test: blockdev write zeroes read split ...passed 00:13:17.264 Test: blockdev write zeroes read split partial ...passed 00:13:17.264 Test: blockdev reset ...passed 
00:13:17.264 Test: blockdev write read 8 blocks ...passed 00:13:17.264 Test: blockdev write read size > 128k ...passed 00:13:17.264 Test: blockdev write read invalid size ...passed 00:13:17.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.264 Test: blockdev write read max offset ...passed 00:13:17.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.264 Test: blockdev writev readv 8 blocks ...passed 00:13:17.264 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.264 Test: blockdev writev readv block ...passed 00:13:17.264 Test: blockdev writev readv size > 128k ...passed 00:13:17.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.264 Test: blockdev comparev and writev ...passed 00:13:17.264 Test: blockdev nvme passthru rw ...passed 00:13:17.264 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.264 Test: blockdev nvme admin passthru ...passed 00:13:17.264 Test: blockdev copy ...passed 00:13:17.264 Suite: bdevio tests on: nvme2n2 00:13:17.264 Test: blockdev write read block ...passed 00:13:17.264 Test: blockdev write zeroes read block ...passed 00:13:17.264 Test: blockdev write zeroes read no split ...passed 00:13:17.526 Test: blockdev write zeroes read split ...passed 00:13:17.526 Test: blockdev write zeroes read split partial ...passed 00:13:17.526 Test: blockdev reset ...passed 00:13:17.526 Test: blockdev write read 8 blocks ...passed 00:13:17.526 Test: blockdev write read size > 128k ...passed 00:13:17.526 Test: blockdev write read invalid size ...passed 00:13:17.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.526 Test: blockdev write read max offset ...passed 00:13:17.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.526 Test: blockdev writev readv 8 blocks ...passed 00:13:17.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.526 Test: blockdev writev readv block ...passed 00:13:17.526 Test: blockdev writev readv size > 128k ...passed 00:13:17.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.526 Test: blockdev comparev and writev ...passed 00:13:17.526 Test: blockdev nvme passthru rw ...passed 00:13:17.526 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.526 Test: blockdev nvme admin passthru ...passed 00:13:17.526 Test: blockdev copy ...passed 00:13:17.526 Suite: bdevio tests on: nvme2n1 00:13:17.526 Test: blockdev write read block ...passed 00:13:17.526 Test: blockdev write zeroes read block ...passed 00:13:17.526 Test: blockdev write zeroes read no split ...passed 00:13:17.526 Test: blockdev write zeroes read split ...passed 00:13:17.526 Test: blockdev write zeroes read split partial ...passed 00:13:17.526 Test: blockdev reset ...passed 00:13:17.526 Test: blockdev write read 8 blocks ...passed 00:13:17.526 Test: blockdev write read size > 128k ...passed 00:13:17.526 Test: blockdev write read invalid size ...passed 00:13:17.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.526 Test: blockdev write read max offset ...passed 00:13:17.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.526 Test: blockdev writev readv 8 blocks 
...passed 00:13:17.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.526 Test: blockdev writev readv block ...passed 00:13:17.526 Test: blockdev writev readv size > 128k ...passed 00:13:17.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.526 Test: blockdev comparev and writev ...passed 00:13:17.526 Test: blockdev nvme passthru rw ...passed 00:13:17.526 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.526 Test: blockdev nvme admin passthru ...passed 00:13:17.526 Test: blockdev copy ...passed 00:13:17.526 Suite: bdevio tests on: nvme1n1 00:13:17.526 Test: blockdev write read block ...passed 00:13:17.526 Test: blockdev write zeroes read block ...passed 00:13:17.526 Test: blockdev write zeroes read no split ...passed 00:13:17.526 Test: blockdev write zeroes read split ...passed 00:13:17.526 Test: blockdev write zeroes read split partial ...passed 00:13:17.526 Test: blockdev reset ...passed 00:13:17.526 Test: blockdev write read 8 blocks ...passed 00:13:17.526 Test: blockdev write read size > 128k ...passed 00:13:17.526 Test: blockdev write read invalid size ...passed 00:13:17.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.526 Test: blockdev write read max offset ...passed 00:13:17.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.526 Test: blockdev writev readv 8 blocks ...passed 00:13:17.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.526 Test: blockdev writev readv block ...passed 00:13:17.526 Test: blockdev writev readv size > 128k ...passed 00:13:17.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.526 Test: blockdev comparev and writev ...passed 00:13:17.526 Test: blockdev nvme passthru rw ...passed 00:13:17.526 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.526 Test: blockdev nvme admin passthru ...passed 00:13:17.526 Test: blockdev copy ...passed 00:13:17.526 Suite: bdevio tests on: nvme0n1 00:13:17.526 Test: blockdev write read block ...passed 00:13:17.526 Test: blockdev write zeroes read block ...passed 00:13:17.526 Test: blockdev write zeroes read no split ...passed 00:13:17.526 Test: blockdev write zeroes read split ...passed 00:13:17.526 Test: blockdev write zeroes read split partial ...passed 00:13:17.526 Test: blockdev reset ...passed 00:13:17.526 Test: blockdev write read 8 blocks ...passed 00:13:17.526 Test: blockdev write read size > 128k ...passed 00:13:17.526 Test: blockdev write read invalid size ...passed 00:13:17.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.526 Test: blockdev write read max offset ...passed 00:13:17.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.526 Test: blockdev writev readv 8 blocks ...passed 00:13:17.526 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.526 Test: blockdev writev readv block ...passed 00:13:17.526 Test: blockdev writev readv size > 128k ...passed 00:13:17.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.526 Test: blockdev comparev and writev ...passed 00:13:17.526 Test: blockdev nvme passthru rw ...passed 00:13:17.526 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.526 Test: blockdev nvme admin passthru ...passed 00:13:17.526 Test: blockdev copy ...passed 
00:13:17.526 00:13:17.526 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.526 suites 6 6 n/a 0 0 00:13:17.526 tests 138 138 138 0 0 00:13:17.526 asserts 780 780 780 0 n/a 00:13:17.526 00:13:17.526 Elapsed time = 0.290 seconds 00:13:17.526 0 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81619 ']' 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81619' 00:13:17.526 killing process with pid 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81619 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:17.526 00:13:17.526 real 0m1.305s 00:13:17.526 user 0m3.237s 00:13:17.526 sys 0m0.272s 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.526 14:20:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:17.526 ************************************ 00:13:17.526 END TEST bdev_bounds 00:13:17.526 ************************************ 00:13:17.788 14:20:59 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:17.788 14:20:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:17.788 14:20:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.788 14:20:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.788 ************************************ 00:13:17.788 START TEST bdev_nbd 00:13:17.788 ************************************ 00:13:17.788 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
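The bdev_nbd test starting here exports the same six bdevs as /dev/nbd* block devices and exercises them from the host side. As the trace just below shows, it first brings up a bare bdev_svc instance on its own RPC socket, and everything that follows talks to that socket rather than to a full spdk_tgt:

  # dedicated bdev service for the NBD export test (command as used in this run)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &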
00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:17.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81671 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81671 /var/tmp/spdk-nbd.sock 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81671 ']' 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:17.789 14:20:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:17.789 [2024-11-29 14:20:59.436975] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
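Once the service is listening, each bdev is attached to an NBD node with the nbd_start_disk RPC, and waitfornbd confirms the node is usable, which is what the repeated "1+0 records in / out" dd records below come from. A simplified per-device sketch of that loop, using the bdev/nbd pairs from this run (the harness copies the block to a scratch file and checks its size rather than discarding it):

  for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
      # ask the bdev service to expose the bdev; the RPC prints the /dev/nbdX it allocated
      nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
      # check that the kernel lists the node (the harness retries this up to 20 times),
      # then read a single 4 KiB block through it with O_DIRECT
      grep -q -w "${nbd##*/}" /proc/partitions
      dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
  done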
00:13:17.789 [2024-11-29 14:20:59.437097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.050 [2024-11-29 14:20:59.586896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.050 [2024-11-29 14:20:59.624070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.622 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.884 
1+0 records in 00:13:18.884 1+0 records out 00:13:18.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730742 s, 5.6 MB/s 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:18.884 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.146 1+0 records in 00:13:19.146 1+0 records out 00:13:19.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014809 s, 2.8 MB/s 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:19.146 14:21:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:19.407 14:21:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.407 1+0 records in 00:13:19.407 1+0 records out 00:13:19.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103865 s, 3.9 MB/s 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:19.407 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.669 1+0 records in 00:13:19.669 1+0 records out 00:13:19.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916607 s, 4.5 MB/s 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:19.669 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:19.930 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:19.930 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:19.930 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.931 1+0 records in 00:13:19.931 1+0 records out 00:13:19.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129219 s, 3.2 MB/s 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:19.931 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:20.190 14:21:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.190 1+0 records in 00:13:20.190 1+0 records out 00:13:20.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103481 s, 4.0 MB/s 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:20.190 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:20.450 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd0", 00:13:20.450 "bdev_name": "nvme0n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd1", 00:13:20.450 "bdev_name": "nvme1n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd2", 00:13:20.450 "bdev_name": "nvme2n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd3", 00:13:20.450 "bdev_name": "nvme2n2" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd4", 00:13:20.450 "bdev_name": "nvme2n3" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd5", 00:13:20.450 "bdev_name": "nvme3n1" 00:13:20.450 } 00:13:20.450 ]' 00:13:20.450 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:20.450 14:21:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd0", 00:13:20.450 "bdev_name": "nvme0n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd1", 00:13:20.450 "bdev_name": "nvme1n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd2", 00:13:20.450 "bdev_name": "nvme2n1" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd3", 00:13:20.450 "bdev_name": "nvme2n2" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd4", 00:13:20.450 "bdev_name": "nvme2n3" 00:13:20.450 }, 00:13:20.450 { 00:13:20.450 "nbd_device": "/dev/nbd5", 00:13:20.450 "bdev_name": "nvme3n1" 00:13:20.450 } 00:13:20.450 ]' 00:13:20.450 14:21:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.451 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.711 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.971 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.232 14:21:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.493 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:21.754 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:21.755 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:22.017 /dev/nbd0 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.017 1+0 records in 00:13:22.017 1+0 records out 00:13:22.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441373 s, 9.3 MB/s 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.017 14:21:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:22.278 /dev/nbd1 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.278 1+0 records in 00:13:22.278 1+0 records out 00:13:22.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381249 s, 10.7 MB/s 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.278 14:21:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.278 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:22.539 /dev/nbd10 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.539 1+0 records in 00:13:22.539 1+0 records out 00:13:22.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395579 s, 10.4 MB/s 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.539 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:22.800 /dev/nbd11 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.800 14:21:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.800 1+0 records in 00:13:22.800 1+0 records out 00:13:22.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053231 s, 7.7 MB/s 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:22.800 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:23.061 /dev/nbd12 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.061 1+0 records in 00:13:23.061 1+0 records out 00:13:23.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403025 s, 10.2 MB/s 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:23.061 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:23.322 /dev/nbd13 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.322 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.323 1+0 records in 00:13:23.323 1+0 records out 00:13:23.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542518 s, 7.5 MB/s 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.323 14:21:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd0", 00:13:23.582 "bdev_name": "nvme0n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd1", 00:13:23.582 "bdev_name": "nvme1n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd10", 00:13:23.582 "bdev_name": "nvme2n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd11", 00:13:23.582 "bdev_name": "nvme2n2" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd12", 00:13:23.582 "bdev_name": "nvme2n3" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd13", 00:13:23.582 "bdev_name": "nvme3n1" 00:13:23.582 } 00:13:23.582 ]' 00:13:23.582 14:21:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd0", 00:13:23.582 "bdev_name": "nvme0n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd1", 00:13:23.582 "bdev_name": "nvme1n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd10", 00:13:23.582 "bdev_name": "nvme2n1" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd11", 00:13:23.582 "bdev_name": "nvme2n2" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd12", 00:13:23.582 "bdev_name": "nvme2n3" 00:13:23.582 }, 00:13:23.582 { 00:13:23.582 "nbd_device": "/dev/nbd13", 00:13:23.582 "bdev_name": "nvme3n1" 00:13:23.582 } 00:13:23.582 ]' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:23.582 /dev/nbd1 00:13:23.582 /dev/nbd10 00:13:23.582 /dev/nbd11 00:13:23.582 /dev/nbd12 00:13:23.582 /dev/nbd13' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:23.582 /dev/nbd1 00:13:23.582 /dev/nbd10 00:13:23.582 /dev/nbd11 00:13:23.582 /dev/nbd12 00:13:23.582 /dev/nbd13' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:23.582 256+0 records in 00:13:23.582 256+0 records out 00:13:23.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632634 s, 166 MB/s 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.582 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:23.582 256+0 records in 00:13:23.582 256+0 records out 00:13:23.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0636496 s, 16.5 MB/s 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:23.583 256+0 records in 00:13:23.583 256+0 records out 00:13:23.583 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0745432 s, 14.1 MB/s 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:23.583 256+0 records in 00:13:23.583 256+0 records out 00:13:23.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.057634 s, 18.2 MB/s 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.583 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:23.842 256+0 records in 00:13:23.842 256+0 records out 00:13:23.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0959181 s, 10.9 MB/s 00:13:23.842 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.842 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:23.842 256+0 records in 00:13:23.842 256+0 records out 00:13:23.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160078 s, 6.6 MB/s 00:13:23.842 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:23.842 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:24.104 256+0 records in 00:13:24.104 256+0 records out 00:13:24.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220512 s, 4.8 MB/s 00:13:24.104 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.105 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.366 14:21:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.366 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.628 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.889 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.890 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.151 14:21:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.412 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.673 14:21:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:25.673 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:25.934 malloc_lvol_verify 00:13:25.934 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:26.195 c326ceb4-1971-4a05-83a1-2cfc74062439 00:13:26.195 14:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:26.457 486a6dfd-6694-4c6e-8cdf-ba974dccf49a 00:13:26.457 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:26.719 /dev/nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:26.719 mke2fs 1.47.0 (5-Feb-2023) 00:13:26.719 Discarding device blocks: 0/4096 done 00:13:26.719 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:26.719 00:13:26.719 Allocating group tables: 0/1 done 00:13:26.719 Writing inode tables: 0/1 done 00:13:26.719 Creating journal (1024 blocks): done 00:13:26.719 Writing superblocks and filesystem accounting information: 0/1 done 00:13:26.719 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81671 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81671 ']' 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81671 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81671 00:13:26.719 killing process with pid 81671 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81671' 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81671 00:13:26.719 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81671 00:13:26.981 ************************************ 00:13:26.981 END TEST bdev_nbd 00:13:26.981 ************************************ 00:13:26.981 14:21:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:26.981 00:13:26.981 real 0m9.280s 00:13:26.981 user 0m13.322s 00:13:26.981 sys 0m3.221s 00:13:26.981 14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.981 
14:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:26.981 14:21:08 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:26.981 14:21:08 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:26.981 14:21:08 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:26.981 14:21:08 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:26.981 14:21:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.981 14:21:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.981 14:21:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.981 ************************************ 00:13:26.981 START TEST bdev_fio 00:13:26.981 ************************************ 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:26.981 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:26.981 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.982 14:21:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:27.244 ************************************ 00:13:27.244 START TEST bdev_fio_rw_verify 00:13:27.244 ************************************ 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:27.244 14:21:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:27.244 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:27.244 fio-3.35 00:13:27.244 Starting 6 threads 00:13:39.483 00:13:39.483 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82055: Fri Nov 29 14:21:19 2024 00:13:39.483 read: IOPS=12.8k, BW=49.8MiB/s (52.3MB/s)(499MiB/10002msec) 00:13:39.483 slat (usec): min=2, max=2792, avg= 6.99, stdev=22.87 00:13:39.483 clat (usec): min=83, max=10215, avg=1497.38, 
stdev=841.82 00:13:39.483 lat (usec): min=87, max=10231, avg=1504.37, stdev=842.48 00:13:39.483 clat percentiles (usec): 00:13:39.483 | 50.000th=[ 1385], 99.000th=[ 4080], 99.900th=[ 5604], 99.990th=[ 8455], 00:13:39.483 | 99.999th=[10159] 00:13:39.483 write: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(508MiB/10002msec); 0 zone resets 00:13:39.483 slat (usec): min=10, max=5112, avg=46.05, stdev=163.81 00:13:39.483 clat (usec): min=91, max=11935, avg=1874.83, stdev=1060.76 00:13:39.483 lat (usec): min=105, max=11964, avg=1920.88, stdev=1073.06 00:13:39.483 clat percentiles (usec): 00:13:39.483 | 50.000th=[ 1696], 99.000th=[ 5538], 99.900th=[ 8455], 99.990th=[10159], 00:13:39.483 | 99.999th=[10945] 00:13:39.483 bw ( KiB/s): min=41940, max=87769, per=100.00%, avg=52322.21, stdev=1761.25, samples=114 00:13:39.483 iops : min=10483, max=21942, avg=13080.00, stdev=440.33, samples=114 00:13:39.483 lat (usec) : 100=0.01%, 250=1.61%, 500=5.45%, 750=7.03%, 1000=9.89% 00:13:39.483 lat (msec) : 2=45.95%, 4=27.45%, 10=2.60%, 20=0.01% 00:13:39.483 cpu : usr=43.47%, sys=32.33%, ctx=4637, majf=0, minf=13538 00:13:39.483 IO depths : 1=11.0%, 2=23.3%, 4=51.5%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.483 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.483 issued rwts: total=127619,129956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.483 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:39.483 00:13:39.483 Run status group 0 (all jobs): 00:13:39.483 READ: bw=49.8MiB/s (52.3MB/s), 49.8MiB/s-49.8MiB/s (52.3MB/s-52.3MB/s), io=499MiB (523MB), run=10002-10002msec 00:13:39.483 WRITE: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=508MiB (532MB), run=10002-10002msec 00:13:39.483 ----------------------------------------------------- 00:13:39.483 Suppressions used: 00:13:39.483 count bytes template 00:13:39.483 6 48 /usr/src/fio/parse.c 00:13:39.483 2252 216192 /usr/src/fio/iolog.c 00:13:39.483 1 8 libtcmalloc_minimal.so 00:13:39.483 1 904 libcrypto.so 00:13:39.483 ----------------------------------------------------- 00:13:39.483 00:13:39.483 00:13:39.483 real 0m11.143s 00:13:39.483 user 0m26.786s 00:13:39.483 sys 0m19.705s 00:13:39.483 ************************************ 00:13:39.483 END TEST bdev_fio_rw_verify 00:13:39.484 ************************************ 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 
-- # local fio_dir=/usr/src/fio 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:39.484 14:21:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c870cb9f-e7e5-4fee-a934-a51feef8ce2c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c870cb9f-e7e5-4fee-a934-a51feef8ce2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "48c37a27-24c3-42ae-a578-7a8ee4aa6b4b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "48c37a27-24c3-42ae-a578-7a8ee4aa6b4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2e603118-a5cd-41da-8df2-2133eb79a779"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2e603118-a5cd-41da-8df2-2133eb79a779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "1b13b206-857a-4c66-988a-59ddceaa0b18"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b13b206-857a-4c66-988a-59ddceaa0b18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "59790cbc-733f-4e82-b233-29b828f01bda"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59790cbc-733f-4e82-b233-29b828f01bda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7815dc8b-a92a-4e26-9792-0613a8b49e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7815dc8b-a92a-4e26-9792-0613a8b49e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:39.484 /home/vagrant/spdk_repo/spdk 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:13:39.484 00:13:39.484 real 0m11.316s 00:13:39.484 user 0m26.858s 00:13:39.484 sys 0m19.787s 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.484 ************************************ 00:13:39.484 END TEST bdev_fio 00:13:39.484 ************************************ 00:13:39.484 14:21:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 14:21:20 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:39.484 14:21:20 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.484 14:21:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:39.484 14:21:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.484 14:21:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 ************************************ 00:13:39.484 START TEST bdev_verify 00:13:39.484 ************************************ 00:13:39.484 14:21:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:39.484 [2024-11-29 14:21:20.170916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:39.484 [2024-11-29 14:21:20.171059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82226 ] 00:13:39.484 [2024-11-29 14:21:20.321348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:39.484 [2024-11-29 14:21:20.375407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.484 [2024-11-29 14:21:20.375513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.484 Running I/O for 5 seconds... 
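For reference, the bdev_verify stage started here is a thin wrapper around the bdevperf example binary; a minimal standalone sketch of the equivalent invocation, using only the repository path and flags already shown in the traced command above, would be:

    # Sketch of the bdev_verify invocation traced above (not the harness itself):
    # -q 128 queue depth, -o 4096 I/O size in bytes, -w verify workload,
    # -t 5 run time in seconds, -m 0x3 core mask; -C is passed through unchanged.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The same binary is reused below for the big-I/O pass (-o 65536) and the write_zeroes variant.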
00:13:41.002 24259.00 IOPS, 94.76 MiB/s [2024-11-29T14:21:24.182Z] 25156.00 IOPS, 98.27 MiB/s [2024-11-29T14:21:25.163Z] 24652.67 IOPS, 96.30 MiB/s [2024-11-29T14:21:25.734Z] 24177.75 IOPS, 94.44 MiB/s 00:13:43.940 Latency(us) 00:13:43.940 [2024-11-29T14:21:25.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.940 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0xa0000 00:13:43.940 nvme0n1 : 5.03 1832.54 7.16 0.00 0.00 69721.03 8065.97 70980.53 00:13:43.940 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0xa0000 length 0xa0000 00:13:43.940 nvme0n1 : 5.01 1890.86 7.39 0.00 0.00 67568.24 10637.00 62511.26 00:13:43.940 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0xbd0bd 00:13:43.940 nvme1n1 : 5.06 2326.43 9.09 0.00 0.00 54785.64 4360.66 91145.45 00:13:43.940 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:43.940 nvme1n1 : 5.04 2435.69 9.51 0.00 0.00 52267.31 5898.24 106470.79 00:13:43.940 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0x80000 00:13:43.940 nvme2n1 : 5.06 1872.88 7.32 0.00 0.00 67720.72 12855.14 63721.16 00:13:43.940 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x80000 length 0x80000 00:13:43.940 nvme2n1 : 5.04 1931.12 7.54 0.00 0.00 65967.89 7309.78 60494.77 00:13:43.940 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0x80000 00:13:43.940 nvme2n2 : 5.06 1846.98 7.21 0.00 0.00 68515.13 12754.31 60898.07 00:13:43.940 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x80000 length 0x80000 00:13:43.940 nvme2n2 : 5.05 1874.72 7.32 0.00 0.00 67749.73 9427.10 63721.16 00:13:43.940 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0x80000 00:13:43.940 nvme2n3 : 5.06 1844.98 7.21 0.00 0.00 68492.27 7309.78 60494.77 00:13:43.940 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x80000 length 0x80000 00:13:43.940 nvme2n3 : 5.05 1874.09 7.32 0.00 0.00 67616.07 8973.39 66544.25 00:13:43.940 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x0 length 0x20000 00:13:43.940 nvme3n1 : 5.07 1869.01 7.30 0.00 0.00 67563.12 1751.83 60494.77 00:13:43.940 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.940 Verification LBA range: start 0x20000 length 0x20000 00:13:43.940 nvme3n1 : 5.06 1897.73 7.41 0.00 0.00 66644.66 2646.65 66544.25 00:13:43.940 [2024-11-29T14:21:25.734Z] =================================================================================================================== 00:13:43.940 [2024-11-29T14:21:25.734Z] Total : 23497.03 91.79 0.00 0.00 64854.94 1751.83 106470.79 00:13:44.202 00:13:44.202 real 0m5.815s 00:13:44.202 user 0m9.224s 00:13:44.202 sys 0m1.448s 00:13:44.202 14:21:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.202 
************************************ 00:13:44.202 END TEST bdev_verify 00:13:44.202 ************************************ 00:13:44.202 14:21:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:44.202 14:21:25 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.202 14:21:25 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:44.202 14:21:25 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.202 14:21:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.202 ************************************ 00:13:44.202 START TEST bdev_verify_big_io 00:13:44.202 ************************************ 00:13:44.202 14:21:25 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:44.464 [2024-11-29 14:21:26.052286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:44.464 [2024-11-29 14:21:26.052427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82316 ] 00:13:44.464 [2024-11-29 14:21:26.204607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.464 [2024-11-29 14:21:26.255670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.464 [2024-11-29 14:21:26.255784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.724 Running I/O for 5 seconds... 
00:13:51.338 1104.00 IOPS, 69.00 MiB/s [2024-11-29T14:21:33.132Z] 3002.50 IOPS, 187.66 MiB/s 00:13:51.338 Latency(us) 00:13:51.338 [2024-11-29T14:21:33.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.338 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0xa000 00:13:51.338 nvme0n1 : 5.96 77.82 4.86 0.00 0.00 1578847.42 314572.80 2606921.26 00:13:51.338 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0xa000 length 0xa000 00:13:51.338 nvme0n1 : 6.00 103.48 6.47 0.00 0.00 1210421.18 9628.75 1690627.15 00:13:51.338 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0xbd0b 00:13:51.338 nvme1n1 : 5.95 161.32 10.08 0.00 0.00 734248.43 14115.45 845313.58 00:13:51.338 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:51.338 nvme1n1 : 5.99 114.77 7.17 0.00 0.00 1061711.34 40733.14 1161499.57 00:13:51.338 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0x8000 00:13:51.338 nvme2n1 : 5.97 175.58 10.97 0.00 0.00 669173.55 16232.76 751748.33 00:13:51.338 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x8000 length 0x8000 00:13:51.338 nvme2n1 : 5.99 104.23 6.51 0.00 0.00 1136004.19 24197.91 2000360.37 00:13:51.338 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0x8000 00:13:51.338 nvme2n2 : 5.98 128.53 8.03 0.00 0.00 886440.60 12552.66 1006632.96 00:13:51.338 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x8000 length 0x8000 00:13:51.338 nvme2n2 : 6.00 114.72 7.17 0.00 0.00 999548.82 9729.58 1606741.07 00:13:51.338 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0x8000 00:13:51.338 nvme2n3 : 5.98 144.47 9.03 0.00 0.00 764281.63 26819.35 1038896.84 00:13:51.338 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x8000 length 0x8000 00:13:51.338 nvme2n3 : 6.00 104.00 6.50 0.00 0.00 1067572.88 24197.91 1219574.55 00:13:51.338 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x0 length 0x2000 00:13:51.338 nvme3n1 : 5.99 149.64 9.35 0.00 0.00 719419.50 7612.26 884030.23 00:13:51.338 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:51.338 Verification LBA range: start 0x2000 length 0x2000 00:13:51.338 nvme3n1 : 6.00 114.58 7.16 0.00 0.00 937778.39 11443.59 1355082.83 00:13:51.338 [2024-11-29T14:21:33.132Z] =================================================================================================================== 00:13:51.338 [2024-11-29T14:21:33.132Z] Total : 1493.14 93.32 0.00 0.00 930765.18 7612.26 2606921.26 00:13:51.338 00:13:51.338 real 0m6.904s 00:13:51.338 user 0m12.529s 00:13:51.338 sys 0m0.510s 00:13:51.338 ************************************ 00:13:51.338 14:21:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.338 14:21:32 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.338 END TEST bdev_verify_big_io 00:13:51.338 ************************************ 00:13:51.338 14:21:32 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.338 14:21:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:51.338 14:21:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.338 14:21:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.338 ************************************ 00:13:51.338 START TEST bdev_write_zeroes 00:13:51.338 ************************************ 00:13:51.338 14:21:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:51.338 [2024-11-29 14:21:33.029538] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:51.338 [2024-11-29 14:21:33.029685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82414 ] 00:13:51.598 [2024-11-29 14:21:33.181802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.598 [2024-11-29 14:21:33.253287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.859 Running I/O for 1 seconds... 00:13:52.804 98752.00 IOPS, 385.75 MiB/s 00:13:52.804 Latency(us) 00:13:52.804 [2024-11-29T14:21:34.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.805 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme0n1 : 1.02 16014.83 62.56 0.00 0.00 7982.68 5948.65 32667.18 00:13:52.805 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme1n1 : 1.02 17183.12 67.12 0.00 0.00 7433.89 5696.59 26214.40 00:13:52.805 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme2n1 : 1.03 15977.70 62.41 0.00 0.00 7927.70 4562.31 24903.68 00:13:52.805 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme2n2 : 1.03 15959.68 62.34 0.00 0.00 7930.23 4713.55 25306.98 00:13:52.805 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme2n3 : 1.03 15941.57 62.27 0.00 0.00 7931.72 4637.93 26819.35 00:13:52.805 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:52.805 nvme3n1 : 1.03 15923.63 62.20 0.00 0.00 7933.22 4663.14 27222.65 00:13:52.805 [2024-11-29T14:21:34.599Z] =================================================================================================================== 00:13:52.805 [2024-11-29T14:21:34.599Z] Total : 97000.53 378.91 0.00 0.00 7851.41 4562.31 32667.18 00:13:53.067 00:13:53.067 real 0m1.837s 00:13:53.067 user 0m1.109s 00:13:53.067 sys 0m0.544s 00:13:53.067 14:21:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.067 ************************************ 00:13:53.067 END TEST bdev_write_zeroes 00:13:53.067 ************************************ 
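The two json_config tests that follow (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed configs and only check that the loader rejects them with the errors visible below. The fixture files themselves are not dumped in this log; their shape is an inference from the error text and from the valid save_config output later in the run, so the snippet below is an illustrative guess, not the actual fixture contents:

    # Well-formed: a top-level object whose "subsystems" key is an array
    # (the same shape as the save_config dump further down).
    cat > /tmp/good.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # nonenclosed.json presumably omits the outer { }, triggering
    # "Invalid JSON configuration: not enclosed in {}."; nonarray.json
    # presumably makes "subsystems" something other than an array, triggering
    # "'subsystems' should be an array".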
00:13:53.067 14:21:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:53.067 14:21:34 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.067 14:21:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:53.067 14:21:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.067 14:21:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.328 ************************************ 00:13:53.328 START TEST bdev_json_nonenclosed 00:13:53.328 ************************************ 00:13:53.328 14:21:34 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.328 [2024-11-29 14:21:34.945966] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:53.328 [2024-11-29 14:21:34.946130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82455 ] 00:13:53.328 [2024-11-29 14:21:35.100302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.588 [2024-11-29 14:21:35.150723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.588 [2024-11-29 14:21:35.150836] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:53.588 [2024-11-29 14:21:35.150856] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:53.588 [2024-11-29 14:21:35.150873] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.588 00:13:53.588 real 0m0.387s 00:13:53.588 user 0m0.163s 00:13:53.588 sys 0m0.118s 00:13:53.588 14:21:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.588 ************************************ 00:13:53.588 END TEST bdev_json_nonenclosed 00:13:53.588 ************************************ 00:13:53.588 14:21:35 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:53.588 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.588 14:21:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:53.588 14:21:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.588 14:21:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.588 ************************************ 00:13:53.588 START TEST bdev_json_nonarray 00:13:53.588 ************************************ 00:13:53.588 14:21:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.849 [2024-11-29 14:21:35.404211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:53.849 [2024-11-29 14:21:35.404374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82486 ] 00:13:53.849 [2024-11-29 14:21:35.566179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.849 [2024-11-29 14:21:35.613917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.849 [2024-11-29 14:21:35.614037] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:53.849 [2024-11-29 14:21:35.614058] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:53.849 [2024-11-29 14:21:35.614071] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:54.110 00:13:54.110 real 0m0.389s 00:13:54.110 user 0m0.156s 00:13:54.110 sys 0m0.128s 00:13:54.110 14:21:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.110 ************************************ 00:13:54.110 END TEST bdev_json_nonarray 00:13:54.110 ************************************ 00:13:54.110 14:21:35 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:54.110 14:21:35 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:54.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:57.231 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.231 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.231 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.803 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:57.803 00:13:57.803 real 0m49.393s 00:13:57.803 user 1m14.938s 00:13:57.803 sys 0m32.892s 00:13:57.803 14:21:39 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.803 ************************************ 00:13:57.803 END TEST blockdev_xnvme 00:13:57.803 ************************************ 00:13:57.803 14:21:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 14:21:39 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:58.065 14:21:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:58.065 14:21:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.065 14:21:39 -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.065 ************************************ 00:13:58.065 START TEST ublk 00:13:58.065 ************************************ 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:58.065 * Looking for test storage... 00:13:58.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.065 14:21:39 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.065 14:21:39 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.065 14:21:39 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.065 14:21:39 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.065 14:21:39 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.065 14:21:39 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:58.065 14:21:39 ublk -- scripts/common.sh@345 -- # : 1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.065 14:21:39 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.065 14:21:39 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@353 -- # local d=1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.065 14:21:39 ublk -- scripts/common.sh@355 -- # echo 1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.065 14:21:39 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@353 -- # local d=2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.065 14:21:39 ublk -- scripts/common.sh@355 -- # echo 2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.065 14:21:39 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.065 14:21:39 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.065 14:21:39 ublk -- scripts/common.sh@368 -- # return 0 00:13:58.065 14:21:39 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:58.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.066 --rc genhtml_branch_coverage=1 00:13:58.066 --rc genhtml_function_coverage=1 00:13:58.066 --rc genhtml_legend=1 00:13:58.066 --rc geninfo_all_blocks=1 00:13:58.066 --rc geninfo_unexecuted_blocks=1 00:13:58.066 00:13:58.066 ' 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:58.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.066 --rc genhtml_branch_coverage=1 00:13:58.066 --rc genhtml_function_coverage=1 00:13:58.066 --rc genhtml_legend=1 00:13:58.066 --rc geninfo_all_blocks=1 00:13:58.066 --rc geninfo_unexecuted_blocks=1 00:13:58.066 00:13:58.066 ' 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:58.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.066 --rc genhtml_branch_coverage=1 00:13:58.066 --rc genhtml_function_coverage=1 00:13:58.066 --rc genhtml_legend=1 00:13:58.066 --rc geninfo_all_blocks=1 00:13:58.066 --rc geninfo_unexecuted_blocks=1 00:13:58.066 00:13:58.066 ' 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:58.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.066 --rc genhtml_branch_coverage=1 00:13:58.066 --rc genhtml_function_coverage=1 00:13:58.066 --rc genhtml_legend=1 00:13:58.066 --rc geninfo_all_blocks=1 00:13:58.066 --rc geninfo_unexecuted_blocks=1 00:13:58.066 00:13:58.066 ' 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:58.066 14:21:39 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:58.066 14:21:39 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:58.066 14:21:39 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:58.066 14:21:39 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:58.066 14:21:39 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:58.066 14:21:39 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:58.066 14:21:39 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:58.066 14:21:39 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:58.066 14:21:39 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:58.066 14:21:39 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.066 14:21:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:58.066 ************************************ 00:13:58.066 START TEST test_save_ublk_config 00:13:58.066 ************************************ 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82773 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82773 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82773 ']' 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.066 14:21:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 [2024-11-29 14:21:39.909561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:58.327 [2024-11-29 14:21:39.909717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82773 ] 00:13:58.327 [2024-11-29 14:21:40.063624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.588 [2024-11-29 14:21:40.131974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.163 [2024-11-29 14:21:40.766523] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:59.163 [2024-11-29 14:21:40.766906] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:59.163 malloc0 00:13:59.163 [2024-11-29 14:21:40.798664] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:59.163 [2024-11-29 14:21:40.798779] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:59.163 [2024-11-29 14:21:40.798788] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:59.163 [2024-11-29 14:21:40.798804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:59.163 [2024-11-29 14:21:40.807664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:59.163 [2024-11-29 14:21:40.807705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:59.163 [2024-11-29 14:21:40.814536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:59.163 [2024-11-29 14:21:40.814680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:59.163 [2024-11-29 14:21:40.832525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:59.163 0 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.163 14:21:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.425 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.425 14:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:59.425 "subsystems": [ 00:13:59.425 { 00:13:59.425 "subsystem": "fsdev", 00:13:59.425 "config": [ 00:13:59.425 { 00:13:59.425 "method": "fsdev_set_opts", 00:13:59.425 "params": { 00:13:59.425 "fsdev_io_pool_size": 65535, 00:13:59.425 "fsdev_io_cache_size": 256 00:13:59.425 } 00:13:59.425 } 00:13:59.425 ] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "keyring", 00:13:59.425 "config": [] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "iobuf", 00:13:59.425 "config": [ 00:13:59.425 { 
00:13:59.425 "method": "iobuf_set_options", 00:13:59.425 "params": { 00:13:59.425 "small_pool_count": 8192, 00:13:59.425 "large_pool_count": 1024, 00:13:59.425 "small_bufsize": 8192, 00:13:59.425 "large_bufsize": 135168 00:13:59.425 } 00:13:59.425 } 00:13:59.425 ] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "sock", 00:13:59.425 "config": [ 00:13:59.425 { 00:13:59.425 "method": "sock_set_default_impl", 00:13:59.425 "params": { 00:13:59.425 "impl_name": "posix" 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "sock_impl_set_options", 00:13:59.425 "params": { 00:13:59.425 "impl_name": "ssl", 00:13:59.425 "recv_buf_size": 4096, 00:13:59.425 "send_buf_size": 4096, 00:13:59.425 "enable_recv_pipe": true, 00:13:59.425 "enable_quickack": false, 00:13:59.425 "enable_placement_id": 0, 00:13:59.425 "enable_zerocopy_send_server": true, 00:13:59.425 "enable_zerocopy_send_client": false, 00:13:59.425 "zerocopy_threshold": 0, 00:13:59.425 "tls_version": 0, 00:13:59.425 "enable_ktls": false 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "sock_impl_set_options", 00:13:59.425 "params": { 00:13:59.425 "impl_name": "posix", 00:13:59.425 "recv_buf_size": 2097152, 00:13:59.425 "send_buf_size": 2097152, 00:13:59.425 "enable_recv_pipe": true, 00:13:59.425 "enable_quickack": false, 00:13:59.425 "enable_placement_id": 0, 00:13:59.425 "enable_zerocopy_send_server": true, 00:13:59.425 "enable_zerocopy_send_client": false, 00:13:59.425 "zerocopy_threshold": 0, 00:13:59.425 "tls_version": 0, 00:13:59.425 "enable_ktls": false 00:13:59.425 } 00:13:59.425 } 00:13:59.425 ] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "vmd", 00:13:59.425 "config": [] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "accel", 00:13:59.425 "config": [ 00:13:59.425 { 00:13:59.425 "method": "accel_set_options", 00:13:59.425 "params": { 00:13:59.425 "small_cache_size": 128, 00:13:59.425 "large_cache_size": 16, 00:13:59.425 "task_count": 2048, 00:13:59.425 "sequence_count": 2048, 00:13:59.425 "buf_count": 2048 00:13:59.425 } 00:13:59.425 } 00:13:59.425 ] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "bdev", 00:13:59.425 "config": [ 00:13:59.425 { 00:13:59.425 "method": "bdev_set_options", 00:13:59.425 "params": { 00:13:59.425 "bdev_io_pool_size": 65535, 00:13:59.425 "bdev_io_cache_size": 256, 00:13:59.425 "bdev_auto_examine": true, 00:13:59.425 "iobuf_small_cache_size": 128, 00:13:59.425 "iobuf_large_cache_size": 16 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_raid_set_options", 00:13:59.425 "params": { 00:13:59.425 "process_window_size_kb": 1024, 00:13:59.425 "process_max_bandwidth_mb_sec": 0 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_iscsi_set_options", 00:13:59.425 "params": { 00:13:59.425 "timeout_sec": 30 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_nvme_set_options", 00:13:59.425 "params": { 00:13:59.425 "action_on_timeout": "none", 00:13:59.425 "timeout_us": 0, 00:13:59.425 "timeout_admin_us": 0, 00:13:59.425 "keep_alive_timeout_ms": 10000, 00:13:59.425 "arbitration_burst": 0, 00:13:59.425 "low_priority_weight": 0, 00:13:59.425 "medium_priority_weight": 0, 00:13:59.425 "high_priority_weight": 0, 00:13:59.425 "nvme_adminq_poll_period_us": 10000, 00:13:59.425 "nvme_ioq_poll_period_us": 0, 00:13:59.425 "io_queue_requests": 0, 00:13:59.425 "delay_cmd_submit": true, 00:13:59.425 "transport_retry_count": 4, 00:13:59.425 "bdev_retry_count": 3, 00:13:59.425 
"transport_ack_timeout": 0, 00:13:59.425 "ctrlr_loss_timeout_sec": 0, 00:13:59.425 "reconnect_delay_sec": 0, 00:13:59.425 "fast_io_fail_timeout_sec": 0, 00:13:59.425 "disable_auto_failback": false, 00:13:59.425 "generate_uuids": false, 00:13:59.425 "transport_tos": 0, 00:13:59.425 "nvme_error_stat": false, 00:13:59.425 "rdma_srq_size": 0, 00:13:59.425 "io_path_stat": false, 00:13:59.425 "allow_accel_sequence": false, 00:13:59.425 "rdma_max_cq_size": 0, 00:13:59.425 "rdma_cm_event_timeout_ms": 0, 00:13:59.425 "dhchap_digests": [ 00:13:59.425 "sha256", 00:13:59.425 "sha384", 00:13:59.425 "sha512" 00:13:59.425 ], 00:13:59.425 "dhchap_dhgroups": [ 00:13:59.425 "null", 00:13:59.425 "ffdhe2048", 00:13:59.425 "ffdhe3072", 00:13:59.425 "ffdhe4096", 00:13:59.425 "ffdhe6144", 00:13:59.425 "ffdhe8192" 00:13:59.425 ] 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_nvme_set_hotplug", 00:13:59.425 "params": { 00:13:59.425 "period_us": 100000, 00:13:59.425 "enable": false 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_malloc_create", 00:13:59.425 "params": { 00:13:59.425 "name": "malloc0", 00:13:59.425 "num_blocks": 8192, 00:13:59.425 "block_size": 4096, 00:13:59.425 "physical_block_size": 4096, 00:13:59.425 "uuid": "3e9013b0-c05d-4bc5-ad46-ae16572ad097", 00:13:59.425 "optimal_io_boundary": 0, 00:13:59.425 "md_size": 0, 00:13:59.425 "dif_type": 0, 00:13:59.425 "dif_is_head_of_md": false, 00:13:59.425 "dif_pi_format": 0 00:13:59.425 } 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "method": "bdev_wait_for_examine" 00:13:59.425 } 00:13:59.425 ] 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "scsi", 00:13:59.425 "config": null 00:13:59.425 }, 00:13:59.425 { 00:13:59.425 "subsystem": "scheduler", 00:13:59.425 "config": [ 00:13:59.425 { 00:13:59.425 "method": "framework_set_scheduler", 00:13:59.425 "params": { 00:13:59.425 "name": "static" 00:13:59.425 } 00:13:59.425 } 00:13:59.426 ] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "vhost_scsi", 00:13:59.426 "config": [] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "vhost_blk", 00:13:59.426 "config": [] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "ublk", 00:13:59.426 "config": [ 00:13:59.426 { 00:13:59.426 "method": "ublk_create_target", 00:13:59.426 "params": { 00:13:59.426 "cpumask": "1" 00:13:59.426 } 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "method": "ublk_start_disk", 00:13:59.426 "params": { 00:13:59.426 "bdev_name": "malloc0", 00:13:59.426 "ublk_id": 0, 00:13:59.426 "num_queues": 1, 00:13:59.426 "queue_depth": 128 00:13:59.426 } 00:13:59.426 } 00:13:59.426 ] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "nbd", 00:13:59.426 "config": [] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "nvmf", 00:13:59.426 "config": [ 00:13:59.426 { 00:13:59.426 "method": "nvmf_set_config", 00:13:59.426 "params": { 00:13:59.426 "discovery_filter": "match_any", 00:13:59.426 "admin_cmd_passthru": { 00:13:59.426 "identify_ctrlr": false 00:13:59.426 }, 00:13:59.426 "dhchap_digests": [ 00:13:59.426 "sha256", 00:13:59.426 "sha384", 00:13:59.426 "sha512" 00:13:59.426 ], 00:13:59.426 "dhchap_dhgroups": [ 00:13:59.426 "null", 00:13:59.426 "ffdhe2048", 00:13:59.426 "ffdhe3072", 00:13:59.426 "ffdhe4096", 00:13:59.426 "ffdhe6144", 00:13:59.426 "ffdhe8192" 00:13:59.426 ] 00:13:59.426 } 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "method": "nvmf_set_max_subsystems", 00:13:59.426 "params": { 00:13:59.426 "max_subsystems": 1024 00:13:59.426 } 00:13:59.426 }, 00:13:59.426 
{ 00:13:59.426 "method": "nvmf_set_crdt", 00:13:59.426 "params": { 00:13:59.426 "crdt1": 0, 00:13:59.426 "crdt2": 0, 00:13:59.426 "crdt3": 0 00:13:59.426 } 00:13:59.426 } 00:13:59.426 ] 00:13:59.426 }, 00:13:59.426 { 00:13:59.426 "subsystem": "iscsi", 00:13:59.426 "config": [ 00:13:59.426 { 00:13:59.426 "method": "iscsi_set_options", 00:13:59.426 "params": { 00:13:59.426 "node_base": "iqn.2016-06.io.spdk", 00:13:59.426 "max_sessions": 128, 00:13:59.426 "max_connections_per_session": 2, 00:13:59.426 "max_queue_depth": 64, 00:13:59.426 "default_time2wait": 2, 00:13:59.426 "default_time2retain": 20, 00:13:59.426 "first_burst_length": 8192, 00:13:59.426 "immediate_data": true, 00:13:59.426 "allow_duplicated_isid": false, 00:13:59.426 "error_recovery_level": 0, 00:13:59.426 "nop_timeout": 60, 00:13:59.426 "nop_in_interval": 30, 00:13:59.426 "disable_chap": false, 00:13:59.426 "require_chap": false, 00:13:59.426 "mutual_chap": false, 00:13:59.426 "chap_group": 0, 00:13:59.426 "max_large_datain_per_connection": 64, 00:13:59.426 "max_r2t_per_connection": 4, 00:13:59.426 "pdu_pool_size": 36864, 00:13:59.426 "immediate_data_pool_size": 16384, 00:13:59.426 "data_out_pool_size": 2048 00:13:59.426 } 00:13:59.426 } 00:13:59.426 ] 00:13:59.426 } 00:13:59.426 ] 00:13:59.426 }' 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82773 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82773 ']' 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82773 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82773 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.426 killing process with pid 82773 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82773' 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82773 00:13:59.426 14:21:41 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82773 00:13:59.687 [2024-11-29 14:21:41.455035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:59.949 [2024-11-29 14:21:41.501550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:59.949 [2024-11-29 14:21:41.501702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:59.949 [2024-11-29 14:21:41.509538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:59.949 [2024-11-29 14:21:41.509615] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:59.949 [2024-11-29 14:21:41.509625] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:59.949 [2024-11-29 14:21:41.509656] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.949 [2024-11-29 14:21:41.509813] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:00.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
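The remainder of test_save_ublk_config is a save/restore round trip: the configuration captured above is replayed into a fresh target and the ublk disk is expected to reappear unchanged. A rough sketch of that flow, with method names taken from the log and rpc.py standing in for the harness's rpc_cmd wrapper (an assumption):

    # 1. First target: spdk_tgt -L ublk, then ublk_create_target (cpumask "1")
    #    and ublk_start_disk (malloc0, ublk_id 0, 1 queue, depth 128) -> /dev/ublkb0
    # 2. Capture the live configuration:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > saved.json
    # 3. Kill the first target (pid 82773 above), then restart from the capture;
    #    the harness pipes the JSON through /dev/fd/63 instead of a file:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c saved.json
    # 4. Verify the restored state: ublk_get_disks should report /dev/ublkb0
    #    and the block device node should exist again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks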
00:14:00.522 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82811 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82811 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82811 ']' 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:00.522 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:00.522 "subsystems": [ 00:14:00.522 { 00:14:00.522 "subsystem": "fsdev", 00:14:00.522 "config": [ 00:14:00.522 { 00:14:00.522 "method": "fsdev_set_opts", 00:14:00.522 "params": { 00:14:00.522 "fsdev_io_pool_size": 65535, 00:14:00.522 "fsdev_io_cache_size": 256 00:14:00.522 } 00:14:00.522 } 00:14:00.522 ] 00:14:00.522 }, 00:14:00.522 { 00:14:00.522 "subsystem": "keyring", 00:14:00.522 "config": [] 00:14:00.522 }, 00:14:00.522 { 00:14:00.522 "subsystem": "iobuf", 00:14:00.522 "config": [ 00:14:00.523 { 00:14:00.523 "method": "iobuf_set_options", 00:14:00.523 "params": { 00:14:00.523 "small_pool_count": 8192, 00:14:00.523 "large_pool_count": 1024, 00:14:00.523 "small_bufsize": 8192, 00:14:00.523 "large_bufsize": 135168 00:14:00.523 } 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "sock", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "sock_set_default_impl", 00:14:00.523 "params": { 00:14:00.523 "impl_name": "posix" 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "sock_impl_set_options", 00:14:00.523 "params": { 00:14:00.523 "impl_name": "ssl", 00:14:00.523 "recv_buf_size": 4096, 00:14:00.523 "send_buf_size": 4096, 00:14:00.523 "enable_recv_pipe": true, 00:14:00.523 "enable_quickack": false, 00:14:00.523 "enable_placement_id": 0, 00:14:00.523 "enable_zerocopy_send_server": true, 00:14:00.523 "enable_zerocopy_send_client": false, 00:14:00.523 "zerocopy_threshold": 0, 00:14:00.523 "tls_version": 0, 00:14:00.523 "enable_ktls": false 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "sock_impl_set_options", 00:14:00.523 "params": { 00:14:00.523 "impl_name": "posix", 00:14:00.523 "recv_buf_size": 2097152, 00:14:00.523 "send_buf_size": 2097152, 00:14:00.523 "enable_recv_pipe": true, 00:14:00.523 "enable_quickack": false, 00:14:00.523 "enable_placement_id": 0, 00:14:00.523 "enable_zerocopy_send_server": true, 00:14:00.523 "enable_zerocopy_send_client": false, 00:14:00.523 "zerocopy_threshold": 0, 00:14:00.523 "tls_version": 0, 00:14:00.523 "enable_ktls": false 00:14:00.523 } 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "vmd", 00:14:00.523 "config": [] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "accel", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "accel_set_options", 00:14:00.523 "params": { 00:14:00.523 "small_cache_size": 128, 
00:14:00.523 "large_cache_size": 16, 00:14:00.523 "task_count": 2048, 00:14:00.523 "sequence_count": 2048, 00:14:00.523 "buf_count": 2048 00:14:00.523 } 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "bdev", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "bdev_set_options", 00:14:00.523 "params": { 00:14:00.523 "bdev_io_pool_size": 65535, 00:14:00.523 "bdev_io_cache_size": 256, 00:14:00.523 "bdev_auto_examine": true, 00:14:00.523 "iobuf_small_cache_size": 128, 00:14:00.523 "iobuf_large_cache_size": 16 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_raid_set_options", 00:14:00.523 "params": { 00:14:00.523 "process_window_size_kb": 1024, 00:14:00.523 "process_max_bandwidth_mb_sec": 0 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_iscsi_set_options", 00:14:00.523 "params": { 00:14:00.523 "timeout_sec": 30 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_nvme_set_options", 00:14:00.523 "params": { 00:14:00.523 "action_on_timeout": "none", 00:14:00.523 "timeout_us": 0, 00:14:00.523 "timeout_admin_us": 0, 00:14:00.523 "keep_alive_timeout_ms": 10000, 00:14:00.523 "arbitration_burst": 0, 00:14:00.523 "low_priority_weight": 0, 00:14:00.523 "medium_priority_weight": 0, 00:14:00.523 "high_priority_weight": 0, 00:14:00.523 "nvme_adminq_poll_period_us": 10000, 00:14:00.523 "nvme_ioq_poll_period_us": 0, 00:14:00.523 "io_queue_requests": 0, 00:14:00.523 "delay_cmd_submit": true, 00:14:00.523 "transport_retry_count": 4, 00:14:00.523 "bdev_retry_count": 3, 00:14:00.523 "transport_ack_timeout": 0, 00:14:00.523 "ctrlr_loss_timeout_sec": 0, 00:14:00.523 "reconnect_delay_sec": 0, 00:14:00.523 "fast_io_fail_timeout_sec": 0, 00:14:00.523 "disable_auto_failback": false, 00:14:00.523 "generate_uuids": false, 00:14:00.523 "transport_tos": 0, 00:14:00.523 "nvme_error_stat": false, 00:14:00.523 "rdma_srq_size": 0, 00:14:00.523 "io_path_stat": false, 00:14:00.523 "allow_accel_sequence": false, 00:14:00.523 "rdma_max_cq_size": 0, 00:14:00.523 "rdma_cm_event_timeout_ms": 0, 00:14:00.523 "dhchap_digests": [ 00:14:00.523 "sha256", 00:14:00.523 "sha384", 00:14:00.523 "sha512" 00:14:00.523 ], 00:14:00.523 "dhchap_dhgroups": [ 00:14:00.523 "null", 00:14:00.523 "ffdhe2048", 00:14:00.523 "ffdhe3072", 00:14:00.523 "ffdhe4096", 00:14:00.523 "ffdhe6144", 00:14:00.523 "ffdhe8192" 00:14:00.523 ] 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_nvme_set_hotplug", 00:14:00.523 "params": { 00:14:00.523 "period_us": 100000, 00:14:00.523 "enable": false 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_malloc_create", 00:14:00.523 "params": { 00:14:00.523 "name": "malloc0", 00:14:00.523 "num_blocks": 8192, 00:14:00.523 "block_size": 4096, 00:14:00.523 "physical_block_size": 4096, 00:14:00.523 "uuid": "3e9013b0-c05d-4bc5-ad46-ae16572ad097", 00:14:00.523 "optimal_io_boundary": 0, 00:14:00.523 "md_size": 0, 00:14:00.523 "dif_type": 0, 00:14:00.523 "dif_is_head_of_md": false, 00:14:00.523 "dif_pi_format": 0 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "bdev_wait_for_examine" 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "scsi", 00:14:00.523 "config": null 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "scheduler", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "framework_set_scheduler", 00:14:00.523 "params": { 00:14:00.523 "name": "static" 00:14:00.523 } 
00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "vhost_scsi", 00:14:00.523 "config": [] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "vhost_blk", 00:14:00.523 "config": [] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "ublk", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "ublk_create_target", 00:14:00.523 "params": { 00:14:00.523 "cpumask": "1" 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "ublk_start_disk", 00:14:00.523 "params": { 00:14:00.523 "bdev_name": "malloc0", 00:14:00.523 "ublk_id": 0, 00:14:00.523 "num_queues": 1, 00:14:00.523 "queue_depth": 128 00:14:00.523 } 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "nbd", 00:14:00.523 "config": [] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "nvmf", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "nvmf_set_config", 00:14:00.523 "params": { 00:14:00.523 "discovery_filter": "match_any", 00:14:00.523 "admin_cmd_passthru": { 00:14:00.523 "identify_ctrlr": false 00:14:00.523 }, 00:14:00.523 "dhchap_digests": [ 00:14:00.523 "sha256", 00:14:00.523 "sha384", 00:14:00.523 "sha512" 00:14:00.523 ], 00:14:00.523 "dhchap_dhgroups": [ 00:14:00.523 "null", 00:14:00.523 "ffdhe2048", 00:14:00.523 "ffdhe3072", 00:14:00.523 "ffdhe4096", 00:14:00.523 "ffdhe6144", 00:14:00.523 "ffdhe8192" 00:14:00.523 ] 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "nvmf_set_max_subsystems", 00:14:00.523 "params": { 00:14:00.523 "max_subsystems": 1024 00:14:00.523 } 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "method": "nvmf_set_crdt", 00:14:00.523 "params": { 00:14:00.523 "crdt1": 0, 00:14:00.523 "crdt2": 0, 00:14:00.523 "crdt3": 0 00:14:00.523 } 00:14:00.523 } 00:14:00.523 ] 00:14:00.523 }, 00:14:00.523 { 00:14:00.523 "subsystem": "iscsi", 00:14:00.523 "config": [ 00:14:00.523 { 00:14:00.523 "method": "iscsi_set_options", 00:14:00.523 "params": { 00:14:00.523 "node_base": "iqn.2016-06.io.spdk", 00:14:00.523 "max_sessions": 128, 00:14:00.523 "max_connections_per_session": 2, 00:14:00.523 "max_queue_depth": 64, 00:14:00.523 "default_time2wait": 2, 00:14:00.523 "default_time2retain": 20, 00:14:00.523 "first_burst_length": 8192, 00:14:00.523 "immediate_data": true, 00:14:00.523 "allow_duplicated_isid": false, 00:14:00.523 "error_recovery_level": 0, 00:14:00.523 "nop_timeout": 60, 00:14:00.523 "nop_in_interval": 30, 00:14:00.523 "disable_chap": false, 00:14:00.523 "require_chap": false, 00:14:00.523 "mutual_chap": false, 00:14:00.523 "chap_group": 0, 00:14:00.523 "max_large_datain_per_connection": 64, 00:14:00.523 "max_r2t_per_connection": 4, 00:14:00.524 "pdu_pool_size": 36864, 00:14:00.524 "immediate_data_pool_size": 16384, 00:14:00.524 "data_out_pool_size": 2048 00:14:00.524 } 00:14:00.524 } 00:14:00.524 ] 00:14:00.524 } 00:14:00.524 ] 00:14:00.524 }' 00:14:00.524 [2024-11-29 14:21:42.109764] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
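For reference: the JSON blob echoed above is a complete SPDK subsystem configuration being replayed into a fresh target through /dev/fd/63. If it was captured earlier with the save_config RPC, as the test name test_save_ublk_config suggests (the capture step is not visible in this excerpt), the same round trip outside the harness would look roughly like the following; the config filename is illustrative, not taken from this run.
  scripts/rpc.py save_config > ublk_config.json
  build/bin/spdk_tgt -L ublk -c ublk_config.json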
00:14:00.524 [2024-11-29 14:21:42.109913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82811 ] 00:14:00.524 [2024-11-29 14:21:42.263314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.785 [2024-11-29 14:21:42.320330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.047 [2024-11-29 14:21:42.669511] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:01.047 [2024-11-29 14:21:42.669851] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:01.047 [2024-11-29 14:21:42.677676] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:01.047 [2024-11-29 14:21:42.677765] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:01.047 [2024-11-29 14:21:42.677774] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:01.047 [2024-11-29 14:21:42.677781] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:01.047 [2024-11-29 14:21:42.686618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:01.047 [2024-11-29 14:21:42.686675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:01.047 [2024-11-29 14:21:42.693531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:01.047 [2024-11-29 14:21:42.693650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:01.047 [2024-11-29 14:21:42.710525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:01.306 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.306 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:01.306 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:01.306 14:21:42 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:01.307 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.307 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:01.307 14:21:42 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82811 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82811 ']' 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82811 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82811 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.307 killing process with pid 82811 00:14:01.307 
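Note that the harness tears this state down by killing pid 82811 rather than issuing RPCs; an explicit teardown of the ublk device and target created above would be roughly the following (rpc.py path abbreviated from the full path used elsewhere in this log):
  scripts/rpc.py ublk_stop_disk 0
  scripts/rpc.py ublk_destroy_target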
14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82811' 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82811 00:14:01.307 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82811 00:14:01.567 [2024-11-29 14:21:43.216284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:01.567 [2024-11-29 14:21:43.253595] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:01.567 [2024-11-29 14:21:43.253724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:01.567 [2024-11-29 14:21:43.260523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:01.567 [2024-11-29 14:21:43.260581] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:01.567 [2024-11-29 14:21:43.260589] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:01.567 [2024-11-29 14:21:43.260621] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:01.567 [2024-11-29 14:21:43.260765] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:02.138 14:21:43 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:02.138 00:14:02.138 real 0m3.867s 00:14:02.138 user 0m2.651s 00:14:02.138 sys 0m1.893s 00:14:02.138 ************************************ 00:14:02.138 END TEST test_save_ublk_config 00:14:02.138 ************************************ 00:14:02.138 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.138 14:21:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.138 14:21:43 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82861 00:14:02.138 14:21:43 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:02.138 14:21:43 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:02.138 14:21:43 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82861 00:14:02.138 14:21:43 ublk -- common/autotest_common.sh@831 -- # '[' -z 82861 ']' 00:14:02.138 14:21:43 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.138 14:21:43 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.138 14:21:43 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.138 14:21:43 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.139 14:21:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.139 [2024-11-29 14:21:43.828508] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
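The test_create_ublk case that follows builds its device by hand instead of replaying a saved config; the sequence it drives through rpc_cmd is roughly equivalent to the sketch below (rpc.py path abbreviated; bdev size, queue count, and queue depth taken from the trace that follows):
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create 128 4096            # returns Malloc0
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
  scripts/rpc.py ublk_get_disks -n 0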
00:14:02.139 [2024-11-29 14:21:43.828676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82861 ] 00:14:02.398 [2024-11-29 14:21:43.984180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.398 [2024-11-29 14:21:44.033675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.398 [2024-11-29 14:21:44.033771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.968 14:21:44 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.968 14:21:44 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:02.968 14:21:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:02.968 14:21:44 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:02.968 14:21:44 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.968 14:21:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.968 ************************************ 00:14:02.968 START TEST test_create_ublk 00:14:02.968 ************************************ 00:14:02.968 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:02.968 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:02.968 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.968 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.968 [2024-11-29 14:21:44.713520] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:02.968 [2024-11-29 14:21:44.715327] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:02.968 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.968 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:02.969 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:02.969 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.969 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.229 [2024-11-29 14:21:44.810722] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:03.229 [2024-11-29 14:21:44.811194] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:03.229 [2024-11-29 14:21:44.811207] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:03.229 [2024-11-29 14:21:44.811217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:03.229 [2024-11-29 14:21:44.819862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:03.229 [2024-11-29 14:21:44.819905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:03.229 
[2024-11-29 14:21:44.826539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:03.229 [2024-11-29 14:21:44.827278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:03.229 [2024-11-29 14:21:44.850543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:03.229 14:21:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:03.229 { 00:14:03.229 "ublk_device": "/dev/ublkb0", 00:14:03.229 "id": 0, 00:14:03.229 "queue_depth": 512, 00:14:03.229 "num_queues": 4, 00:14:03.229 "bdev_name": "Malloc0" 00:14:03.229 } 00:14:03.229 ]' 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:03.229 14:21:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:03.229 14:21:45 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:03.229 14:21:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:03.507 14:21:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:03.507 14:21:45 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:03.507 14:21:45 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:03.508 14:21:45 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:03.508 14:21:45 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:03.509 14:21:45 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
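For readability, the fio command line assembled above corresponds to roughly this job-file form (the job-file layout is an illustrative rewrite; every value is taken from the assembled command):
  [fio_test]
  filename=/dev/ublkb0
  offset=0
  size=134217728
  rw=write
  direct=1
  time_based
  runtime=10
  do_verify=1
  verify=pattern
  verify_pattern=0xcc
  verify_state_save=0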
00:14:03.509 14:21:45 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:03.509 fio: verification read phase will never start because write phase uses all of runtime 00:14:03.509 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:03.509 fio-3.35 00:14:03.509 Starting 1 process 00:14:15.704 00:14:15.704 fio_test: (groupid=0, jobs=1): err= 0: pid=82906: Fri Nov 29 14:21:55 2024 00:14:15.704 write: IOPS=14.0k, BW=54.8MiB/s (57.5MB/s)(548MiB/10001msec); 0 zone resets 00:14:15.704 clat (usec): min=34, max=7860, avg=70.50, stdev=123.68 00:14:15.704 lat (usec): min=35, max=7881, avg=70.92, stdev=123.76 00:14:15.704 clat percentiles (usec): 00:14:15.704 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 59], 00:14:15.704 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:14:15.704 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 73], 95.00th=[ 80], 00:14:15.704 | 99.00th=[ 141], 99.50th=[ 247], 99.90th=[ 2638], 99.95th=[ 3556], 00:14:15.704 | 99.99th=[ 4015] 00:14:15.704 bw ( KiB/s): min= 9824, max=61392, per=99.68%, avg=55968.42, stdev=13427.53, samples=19 00:14:15.704 iops : min= 2456, max=15348, avg=13992.11, stdev=3356.88, samples=19 00:14:15.704 lat (usec) : 50=0.04%, 100=96.65%, 250=2.84%, 500=0.28%, 750=0.01% 00:14:15.704 lat (usec) : 1000=0.01% 00:14:15.704 lat (msec) : 2=0.05%, 4=0.12%, 10=0.01% 00:14:15.704 cpu : usr=2.01%, sys=10.70%, ctx=140381, majf=0, minf=797 00:14:15.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:15.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.704 issued rwts: total=0,140381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:15.705 00:14:15.705 Run status group 0 (all jobs): 00:14:15.705 WRITE: bw=54.8MiB/s (57.5MB/s), 54.8MiB/s-54.8MiB/s (57.5MB/s-57.5MB/s), io=548MiB (575MB), run=10001-10001msec 00:14:15.705 00:14:15.705 Disk stats (read/write): 00:14:15.705 ublkb0: ios=0/138806, merge=0/0, ticks=0/8636, in_queue=8636, util=99.09% 00:14:15.705 14:21:55 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.288606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.705 [2024-11-29 14:21:55.322017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.705 [2024-11-29 14:21:55.323020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.705 [2024-11-29 14:21:55.328526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.705 [2024-11-29 14:21:55.328782] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:15.705 [2024-11-29 14:21:55.328794] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.343616] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:15.705 request: 00:14:15.705 { 00:14:15.705 "ublk_id": 0, 00:14:15.705 "method": "ublk_stop_disk", 00:14:15.705 "req_id": 1 00:14:15.705 } 00:14:15.705 Got JSON-RPC error response 00:14:15.705 response: 00:14:15.705 { 00:14:15.705 "code": -19, 00:14:15.705 "message": "No such device" 00:14:15.705 } 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.705 14:21:55 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.360573] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.705 [2024-11-29 14:21:55.361764] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.705 [2024-11-29 14:21:55.361792] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:15.705 14:21:55 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:15.705 14:21:55 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:15.705 00:14:15.705 real 0m10.817s 00:14:15.705 user 0m0.519s 00:14:15.705 sys 0m1.150s 00:14:15.705 ************************************ 00:14:15.705 END TEST test_create_ublk 00:14:15.705 ************************************ 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:15.705 14:21:55 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:15.705 14:21:55 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.705 14:21:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 ************************************ 00:14:15.705 START TEST test_create_multi_ublk 00:14:15.705 ************************************ 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.575503] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.705 [2024-11-29 14:21:55.576383] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.647636] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:15.705 [2024-11-29 14:21:55.647937] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:15.705 [2024-11-29 14:21:55.647949] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:15.705 [2024-11-29 14:21:55.647955] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.705 [2024-11-29 14:21:55.659529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.705 [2024-11-29 14:21:55.659545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.705 [2024-11-29 14:21:55.671513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.705 [2024-11-29 14:21:55.671995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:15.705 [2024-11-29 14:21:55.692535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.705 [2024-11-29 14:21:55.779606] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:15.705 [2024-11-29 14:21:55.779900] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:15.705 [2024-11-29 14:21:55.779911] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:15.705 [2024-11-29 14:21:55.779917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.705 [2024-11-29 14:21:55.791530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.705 [2024-11-29 14:21:55.791550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.705 [2024-11-29 14:21:55.803518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.705 [2024-11-29 14:21:55.804007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:15.705 [2024-11-29 14:21:55.828517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:15.705 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 
14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 [2024-11-29 14:21:55.911608] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:15.706 [2024-11-29 14:21:55.911900] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:15.706 [2024-11-29 14:21:55.911908] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:15.706 [2024-11-29 14:21:55.911913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.706 [2024-11-29 14:21:55.923526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.706 [2024-11-29 14:21:55.923544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.706 [2024-11-29 14:21:55.935520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.706 [2024-11-29 14:21:55.935999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:15.706 [2024-11-29 14:21:55.960525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 [2024-11-29 14:21:56.043608] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:15.706 [2024-11-29 14:21:56.043912] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:15.706 [2024-11-29 14:21:56.043925] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:15.706 [2024-11-29 14:21:56.043932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.706 
[2024-11-29 14:21:56.055530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.706 [2024-11-29 14:21:56.055551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.706 [2024-11-29 14:21:56.067523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.706 [2024-11-29 14:21:56.068003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:15.706 [2024-11-29 14:21:56.080534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:15.706 { 00:14:15.706 "ublk_device": "/dev/ublkb0", 00:14:15.706 "id": 0, 00:14:15.706 "queue_depth": 512, 00:14:15.706 "num_queues": 4, 00:14:15.706 "bdev_name": "Malloc0" 00:14:15.706 }, 00:14:15.706 { 00:14:15.706 "ublk_device": "/dev/ublkb1", 00:14:15.706 "id": 1, 00:14:15.706 "queue_depth": 512, 00:14:15.706 "num_queues": 4, 00:14:15.706 "bdev_name": "Malloc1" 00:14:15.706 }, 00:14:15.706 { 00:14:15.706 "ublk_device": "/dev/ublkb2", 00:14:15.706 "id": 2, 00:14:15.706 "queue_depth": 512, 00:14:15.706 "num_queues": 4, 00:14:15.706 "bdev_name": "Malloc2" 00:14:15.706 }, 00:14:15.706 { 00:14:15.706 "ublk_device": "/dev/ublkb3", 00:14:15.706 "id": 3, 00:14:15.706 "queue_depth": 512, 00:14:15.706 "num_queues": 4, 00:14:15.706 "bdev_name": "Malloc3" 00:14:15.706 } 00:14:15.706 ]' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 [2024-11-29 14:21:56.743577] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.706 [2024-11-29 14:21:56.779515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.706 [2024-11-29 14:21:56.780401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.706 [2024-11-29 14:21:56.783684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.706 [2024-11-29 14:21:56.783932] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:15.706 [2024-11-29 14:21:56.783939] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.706 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.706 [2024-11-29 14:21:56.802585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.706 [2024-11-29 14:21:56.834559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.706 [2024-11-29 14:21:56.835352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.706 [2024-11-29 14:21:56.842523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.707 [2024-11-29 14:21:56.842762] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:15.707 [2024-11-29 14:21:56.842768] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 [2024-11-29 14:21:56.858588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.707 [2024-11-29 14:21:56.889014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.707 [2024-11-29 14:21:56.890124] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.707 [2024-11-29 14:21:56.898524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.707 [2024-11-29 14:21:56.898775] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:15.707 [2024-11-29 14:21:56.898782] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.707 [2024-11-29 14:21:56.914566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.707 [2024-11-29 14:21:56.946546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.707 [2024-11-29 14:21:56.947218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.707 [2024-11-29 14:21:56.954512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.707 [2024-11-29 14:21:56.954760] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:15.707 [2024-11-29 14:21:56.954766] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:15.707 [2024-11-29 14:21:57.146573] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.707 [2024-11-29 14:21:57.147787] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.707 [2024-11-29 14:21:57.147816] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:15.707 14:21:57 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:15.707 00:14:15.707 real 0m1.923s 00:14:15.707 user 0m0.794s 00:14:15.707 sys 0m0.146s 00:14:15.707 ************************************ 00:14:15.707 END TEST test_create_multi_ublk 00:14:15.707 ************************************ 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.707 14:21:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.964 14:21:57 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:15.964 14:21:57 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:15.964 14:21:57 ublk -- ublk/ublk.sh@130 -- # killprocess 82861 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@950 -- # '[' -z 82861 ']' 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@954 -- # kill -0 82861 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@955 -- # uname 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82861 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.964 killing process with pid 82861 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82861' 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@969 -- # kill 82861 00:14:15.964 14:21:57 ublk -- common/autotest_common.sh@974 -- # wait 82861 00:14:15.964 [2024-11-29 14:21:57.705636] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.964 [2024-11-29 14:21:57.705689] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:16.224 00:14:16.224 real 0m18.375s 00:14:16.224 user 0m28.409s 00:14:16.224 sys 0m7.281s 00:14:16.224 14:21:57 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.224 ************************************ 00:14:16.224 END TEST ublk 00:14:16.224 ************************************ 00:14:16.224 14:21:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.483 14:21:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:16.483 
14:21:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.483 14:21:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.483 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:14:16.483 ************************************ 00:14:16.483 START TEST ublk_recovery 00:14:16.483 ************************************ 00:14:16.483 14:21:58 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:16.483 * Looking for test storage... 00:14:16.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:16.483 14:21:58 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:16.483 14:21:58 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:16.483 14:21:58 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:16.483 14:21:58 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.483 14:21:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.484 14:21:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.484 --rc genhtml_branch_coverage=1 00:14:16.484 --rc genhtml_function_coverage=1 00:14:16.484 --rc genhtml_legend=1 00:14:16.484 --rc geninfo_all_blocks=1 00:14:16.484 --rc geninfo_unexecuted_blocks=1 00:14:16.484 00:14:16.484 ' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.484 --rc genhtml_branch_coverage=1 00:14:16.484 --rc genhtml_function_coverage=1 00:14:16.484 --rc genhtml_legend=1 00:14:16.484 --rc geninfo_all_blocks=1 00:14:16.484 --rc geninfo_unexecuted_blocks=1 00:14:16.484 00:14:16.484 ' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.484 --rc genhtml_branch_coverage=1 00:14:16.484 --rc genhtml_function_coverage=1 00:14:16.484 --rc genhtml_legend=1 00:14:16.484 --rc geninfo_all_blocks=1 00:14:16.484 --rc geninfo_unexecuted_blocks=1 00:14:16.484 00:14:16.484 ' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:16.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.484 --rc genhtml_branch_coverage=1 00:14:16.484 --rc genhtml_function_coverage=1 00:14:16.484 --rc genhtml_legend=1 00:14:16.484 --rc geninfo_all_blocks=1 00:14:16.484 --rc geninfo_unexecuted_blocks=1 00:14:16.484 00:14:16.484 ' 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:16.484 14:21:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83223 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83223 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83223 ']' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.484 14:21:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:16.484 14:21:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:16.484 [2024-11-29 14:21:58.253236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:16.484 [2024-11-29 14:21:58.253338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83223 ] 00:14:16.742 [2024-11-29 14:21:58.395539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:16.742 [2024-11-29 14:21:58.428001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.742 [2024-11-29 14:21:58.428087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:17.306 14:21:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.306 [2024-11-29 14:21:59.096508] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:17.306 [2024-11-29 14:21:59.097455] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.306 14:21:59 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.306 14:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.563 malloc0 00:14:17.563 14:21:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.563 14:21:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:17.563 14:21:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.563 14:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:17.563 [2024-11-29 14:21:59.128608] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:17.563 [2024-11-29 14:21:59.128699] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:17.563 [2024-11-29 14:21:59.128717] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:17.563 [2024-11-29 14:21:59.128725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:17.563 [2024-11-29 14:21:59.137598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:17.563 [2024-11-29 14:21:59.137622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:17.563 [2024-11-29 14:21:59.144518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:17.563 [2024-11-29 14:21:59.144631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:17.563 [2024-11-29 14:21:59.159524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:17.563 1 00:14:17.563 14:21:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.563 14:21:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:18.496 14:22:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83256 00:14:18.496 14:22:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:18.496 14:22:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:18.496 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:18.496 fio-3.35 00:14:18.496 Starting 1 process 00:14:23.810 14:22:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83223 00:14:23.810 14:22:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:29.094 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83223 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:29.094 14:22:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83367 00:14:29.094 14:22:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.094 14:22:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83367 00:14:29.094 14:22:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83367 ']' 00:14:29.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.094 14:22:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.094 [2024-11-29 14:22:10.259047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
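[Annotation] The recovery scenario driven by ublk_recovery.sh above condenses to the shell sketch below: expose a small malloc bdev through ublk, run a 60-second fio workload against /dev/ublkb1, SIGKILL the SPDK target mid-I/O, bring up a fresh target, and re-attach the same device with ublk_recover_disk so the still-running fio job can finish. All RPC names and arguments appear verbatim in the trace; the rpc.py/spdk_tgt paths and the waitforlisten polling are simplified here.

    #!/usr/bin/env bash
    # Condensed sketch of the ublk_recovery flow captured in this log (paths illustrative).
    rpc=./scripts/rpc.py
    modprobe ublk_drv

    spdk_tgt -m 0x3 -L ublk & tgt_pid=$!         # first target instance
    # ...waitforlisten: poll until /var/tmp/spdk.sock accepts RPCs...
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096   # 64 MiB bdev, 4 KiB blocks
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1

    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    sleep 5
    kill -9 "$tgt_pid"                           # crash the target mid-I/O
    spdk_tgt -m 0x3 -L ublk & tgt_pid=$!         # second instance (pid 83367 in this run)
    # ...waitforlisten again...
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1             # re-attach ublk dev 1 to the recreated bdev
    wait "$fio_pid"                              # fio rides out the restart and completes
    $rpc ublk_stop_disk 1
    $rpc ublk_destroy_target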
00:14:29.094 [2024-11-29 14:22:10.259166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83367 ] 00:14:29.094 [2024-11-29 14:22:10.398975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.094 [2024-11-29 14:22:10.429566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.094 [2024-11-29 14:22:10.429580] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:29.354 14:22:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 [2024-11-29 14:22:11.052514] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:29.354 [2024-11-29 14:22:11.053453] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.354 14:22:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 malloc0 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.354 14:22:11 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.354 [2024-11-29 14:22:11.085601] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:29.354 [2024-11-29 14:22:11.085633] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:29.354 [2024-11-29 14:22:11.085645] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:29.354 [2024-11-29 14:22:11.093538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:29.354 [2024-11-29 14:22:11.093555] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:29.354 [2024-11-29 14:22:11.093566] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:29.354 1 00:14:29.354 [2024-11-29 14:22:11.093629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:29.354 14:22:11 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.354 14:22:11 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83256 00:14:29.354 [2024-11-29 14:22:11.101516] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:29.354 [2024-11-29 14:22:11.107864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:29.354 [2024-11-29 14:22:11.115711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:29.354 [2024-11-29 
14:22:11.115729] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:25.574 00:15:25.574 fio_test: (groupid=0, jobs=1): err= 0: pid=83265: Fri Nov 29 14:23:00 2024 00:15:25.574 read: IOPS=26.1k, BW=102MiB/s (107MB/s)(6118MiB/60002msec) 00:15:25.574 slat (nsec): min=1002, max=295472, avg=5128.92, stdev=1536.31 00:15:25.574 clat (usec): min=1163, max=5952.1k, avg=2406.44, stdev=38027.68 00:15:25.574 lat (usec): min=1167, max=5952.2k, avg=2411.57, stdev=38027.68 00:15:25.574 clat percentiles (usec): 00:15:25.574 | 1.00th=[ 1778], 5.00th=[ 1860], 10.00th=[ 1876], 20.00th=[ 1909], 00:15:25.574 | 30.00th=[ 1942], 40.00th=[ 1991], 50.00th=[ 2073], 60.00th=[ 2114], 00:15:25.574 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2409], 95.00th=[ 2999], 00:15:25.574 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 6128], 99.95th=[ 7308], 00:15:25.574 | 99.99th=[12125] 00:15:25.574 bw ( KiB/s): min=28000, max=128376, per=100.00%, avg=114926.23, stdev=13622.42, samples=108 00:15:25.574 iops : min= 7000, max=32094, avg=28731.56, stdev=3405.60, samples=108 00:15:25.574 write: IOPS=26.1k, BW=102MiB/s (107MB/s)(6111MiB/60002msec); 0 zone resets 00:15:25.574 slat (nsec): min=1010, max=196931, avg=5269.59, stdev=1545.32 00:15:25.574 clat (usec): min=1259, max=5952.7k, avg=2488.90, stdev=38052.55 00:15:25.574 lat (usec): min=1264, max=5952.7k, avg=2494.17, stdev=38052.56 00:15:25.574 clat percentiles (usec): 00:15:25.574 | 1.00th=[ 1844], 5.00th=[ 1942], 10.00th=[ 1958], 20.00th=[ 1991], 00:15:25.574 | 30.00th=[ 2024], 40.00th=[ 2073], 50.00th=[ 2180], 60.00th=[ 2212], 00:15:25.574 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2474], 95.00th=[ 2966], 00:15:25.574 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 6194], 99.95th=[ 7635], 00:15:25.574 | 99.99th=[11994] 00:15:25.574 bw ( KiB/s): min=28520, max=127184, per=100.00%, avg=114798.57, stdev=13604.52, samples=108 00:15:25.574 iops : min= 7130, max=31796, avg=28699.64, stdev=3401.13, samples=108 00:15:25.574 lat (msec) : 2=31.02%, 4=66.65%, 10=2.31%, 20=0.01%, >=2000=0.01% 00:15:25.574 cpu : usr=6.04%, sys=27.74%, ctx=101950, majf=0, minf=14 00:15:25.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:25.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.574 issued rwts: total=1566155,1564337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.574 00:15:25.574 Run status group 0 (all jobs): 00:15:25.574 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=6118MiB (6415MB), run=60002-60002msec 00:15:25.574 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=6111MiB (6408MB), run=60002-60002msec 00:15:25.574 00:15:25.574 Disk stats (read/write): 00:15:25.574 ublkb1: ios=1562682/1560899, merge=0/0, ticks=3674796/3670541, in_queue=7345338, util=99.90% 00:15:25.574 14:23:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.574 [2024-11-29 14:23:00.423895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:25.574 [2024-11-29 14:23:00.459639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:25.574 [2024-11-29 
14:23:00.459777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:25.574 [2024-11-29 14:23:00.467525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:25.574 [2024-11-29 14:23:00.467616] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:25.574 [2024-11-29 14:23:00.467622] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.574 14:23:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.574 [2024-11-29 14:23:00.483570] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:25.574 [2024-11-29 14:23:00.484704] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:25.574 [2024-11-29 14:23:00.484734] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.574 14:23:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:25.574 14:23:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:25.574 14:23:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83367 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83367 ']' 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83367 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83367 00:15:25.574 killing process with pid 83367 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83367' 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83367 00:15:25.574 14:23:00 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83367 00:15:25.574 [2024-11-29 14:23:00.684856] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:25.574 [2024-11-29 14:23:00.684926] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:25.575 00:15:25.575 real 1m2.941s 00:15:25.575 user 1m38.923s 00:15:25.575 sys 0m36.591s 00:15:25.575 14:23:00 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.575 ************************************ 00:15:25.575 END TEST ublk_recovery 00:15:25.575 14:23:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:25.575 ************************************ 00:15:25.575 14:23:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:25.575 14:23:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.575 14:23:01 -- common/autotest_common.sh@10 -- # set +x 00:15:25.575 14:23:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:25.575 14:23:01 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:25.575 14:23:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:25.575 14:23:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.575 14:23:01 -- common/autotest_common.sh@10 -- # set +x 00:15:25.575 ************************************ 00:15:25.575 START TEST ftl 00:15:25.575 ************************************ 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:25.575 * Looking for test storage... 00:15:25.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.575 14:23:01 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.575 14:23:01 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.575 14:23:01 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.575 14:23:01 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.575 14:23:01 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.575 14:23:01 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:25.575 14:23:01 ftl -- scripts/common.sh@345 -- # : 1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.575 14:23:01 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.575 14:23:01 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@353 -- # local d=1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.575 14:23:01 ftl -- scripts/common.sh@355 -- # echo 1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.575 14:23:01 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@353 -- # local d=2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.575 14:23:01 ftl -- scripts/common.sh@355 -- # echo 2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.575 14:23:01 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.575 14:23:01 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.575 14:23:01 ftl -- scripts/common.sh@368 -- # return 0 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.575 --rc genhtml_branch_coverage=1 00:15:25.575 --rc genhtml_function_coverage=1 00:15:25.575 --rc genhtml_legend=1 00:15:25.575 --rc geninfo_all_blocks=1 00:15:25.575 --rc geninfo_unexecuted_blocks=1 00:15:25.575 00:15:25.575 ' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.575 --rc genhtml_branch_coverage=1 00:15:25.575 --rc genhtml_function_coverage=1 00:15:25.575 --rc genhtml_legend=1 00:15:25.575 --rc geninfo_all_blocks=1 00:15:25.575 --rc geninfo_unexecuted_blocks=1 00:15:25.575 00:15:25.575 ' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.575 --rc genhtml_branch_coverage=1 00:15:25.575 --rc genhtml_function_coverage=1 00:15:25.575 --rc genhtml_legend=1 00:15:25.575 --rc geninfo_all_blocks=1 00:15:25.575 --rc geninfo_unexecuted_blocks=1 00:15:25.575 00:15:25.575 ' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:25.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.575 --rc genhtml_branch_coverage=1 00:15:25.575 --rc genhtml_function_coverage=1 00:15:25.575 --rc genhtml_legend=1 00:15:25.575 --rc geninfo_all_blocks=1 00:15:25.575 --rc geninfo_unexecuted_blocks=1 00:15:25.575 00:15:25.575 ' 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:25.575 14:23:01 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:25.575 14:23:01 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.575 14:23:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.575 14:23:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
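[Annotation] The lt 1.15 2 / cmp_versions exchange traced above is a plain component-wise comparison: each version string is split on '.', '-' and ':' and the fields are compared left to right, with missing fields treated as 0. A standalone rendering of that logic follows; the function name ver_lt and the purely-numeric-field assumption are ours (the real helpers also route each field through the decimal guard visible in the trace).

    # Standalone sketch of the version comparison done by scripts/common.sh.
    ver_lt() {                                # returns 0 (true) when $1 < $2
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                              # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov is pre-2.0: use the lcov_*-style --rc options"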
00:15:25.575 14:23:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:25.575 14:23:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.575 14:23:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.575 14:23:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.575 14:23:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:25.575 14:23:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:25.575 14:23:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:25.575 14:23:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:25.575 14:23:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.575 14:23:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.575 14:23:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:25.575 14:23:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:25.575 14:23:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:25.575 14:23:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:25.575 14:23:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:25.575 14:23:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:25.575 14:23:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:25.575 14:23:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:25.575 14:23:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:25.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.575 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:25.575 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:25.575 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:25.575 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84166 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84166 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@831 -- # '[' -z 84166 ']' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.575 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.575 14:23:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:25.575 14:23:01 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:25.575 [2024-11-29 14:23:01.834550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:25.575 [2024-11-29 14:23:01.834703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 00:15:25.575 [2024-11-29 14:23:01.983589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.575 [2024-11-29 14:23:02.013674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.575 14:23:02 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.575 14:23:02 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:25.575 14:23:02 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:25.575 14:23:02 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:25.575 14:23:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@50 -- # break 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:25.576 14:23:03 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@63 -- # break 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@66 -- # killprocess 84166 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@950 -- # '[' -z 84166 ']' 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@954 -- # kill -0 84166 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@955 -- # uname 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.576 14:23:04 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84166 00:15:25.576 killing process with pid 84166 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84166' 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@969 -- # kill 84166 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@974 -- # wait 84166 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:25.576 14:23:04 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.576 14:23:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:25.576 ************************************ 00:15:25.576 START TEST ftl_fio_basic 00:15:25.576 ************************************ 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:25.576 * Looking for test storage... 00:15:25.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.576 --rc genhtml_branch_coverage=1 00:15:25.576 --rc genhtml_function_coverage=1 00:15:25.576 --rc genhtml_legend=1 00:15:25.576 --rc geninfo_all_blocks=1 00:15:25.576 --rc geninfo_unexecuted_blocks=1 00:15:25.576 00:15:25.576 ' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.576 --rc genhtml_branch_coverage=1 00:15:25.576 --rc genhtml_function_coverage=1 00:15:25.576 --rc genhtml_legend=1 00:15:25.576 --rc geninfo_all_blocks=1 00:15:25.576 --rc geninfo_unexecuted_blocks=1 00:15:25.576 00:15:25.576 ' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.576 --rc genhtml_branch_coverage=1 00:15:25.576 --rc genhtml_function_coverage=1 00:15:25.576 --rc genhtml_legend=1 00:15:25.576 --rc geninfo_all_blocks=1 00:15:25.576 --rc geninfo_unexecuted_blocks=1 00:15:25.576 00:15:25.576 ' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.576 --rc genhtml_branch_coverage=1 00:15:25.576 --rc genhtml_function_coverage=1 00:15:25.576 --rc genhtml_legend=1 00:15:25.576 --rc geninfo_all_blocks=1 00:15:25.576 --rc geninfo_unexecuted_blocks=1 00:15:25.576 00:15:25.576 ' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:25.576 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84281 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84281 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84281 ']' 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.577 14:23:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:25.577 [2024-11-29 14:23:04.589247] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
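[Annotation] For context on the arguments: fio.sh was invoked as "fio.sh 0000:00:11.0 0000:00:10.0 basic", and the 'basic' suite expands to the three job names shown above. A trimmed illustration of that selection follows; the ${suite[$3]} lookup is an assumption (the exact expression on fio.sh line 25 is not visible in this trace), while the array contents and the device assignments are taken verbatim from it.

    # Trimmed illustration of the argument handling at the top of test/ftl/fio.sh.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    device=$1                    # 0000:00:11.0 here: the FTL base device
    cache_device=$2              # 0000:00:10.0 here: the NV cache device
    tests=${suite[$3]}           # assumed lookup; 'basic' selects the three jobs above
    timeout=240

    export FTL_BDEV_NAME=ftl0
    for t in $tests; do
        echo "would run fio job: $t (per-job timeout ${timeout}s)"
    done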
00:15:25.577 [2024-11-29 14:23:04.589474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84281 ] 00:15:25.577 [2024-11-29 14:23:04.733851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.577 [2024-11-29 14:23:04.771602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.577 [2024-11-29 14:23:04.771727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.577 [2024-11-29 14:23:04.771814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:25.577 { 00:15:25.577 "name": "nvme0n1", 00:15:25.577 "aliases": [ 00:15:25.577 "f94064a3-bca7-437e-b82c-de31c5e9c759" 00:15:25.577 ], 00:15:25.577 "product_name": "NVMe disk", 00:15:25.577 "block_size": 4096, 00:15:25.577 "num_blocks": 1310720, 00:15:25.577 "uuid": "f94064a3-bca7-437e-b82c-de31c5e9c759", 00:15:25.577 "numa_id": -1, 00:15:25.577 "assigned_rate_limits": { 00:15:25.577 "rw_ios_per_sec": 0, 00:15:25.577 "rw_mbytes_per_sec": 0, 00:15:25.577 "r_mbytes_per_sec": 0, 00:15:25.577 "w_mbytes_per_sec": 0 00:15:25.577 }, 00:15:25.577 "claimed": false, 00:15:25.577 "zoned": false, 00:15:25.577 "supported_io_types": { 00:15:25.577 "read": true, 00:15:25.577 "write": true, 00:15:25.577 "unmap": true, 00:15:25.577 "flush": true, 00:15:25.577 "reset": true, 00:15:25.577 "nvme_admin": true, 00:15:25.577 "nvme_io": true, 00:15:25.577 "nvme_io_md": false, 00:15:25.577 "write_zeroes": true, 00:15:25.577 "zcopy": false, 00:15:25.577 "get_zone_info": false, 00:15:25.577 "zone_management": false, 00:15:25.577 "zone_append": false, 00:15:25.577 "compare": true, 00:15:25.577 "compare_and_write": false, 00:15:25.577 "abort": true, 00:15:25.577 
"seek_hole": false, 00:15:25.577 "seek_data": false, 00:15:25.577 "copy": true, 00:15:25.577 "nvme_iov_md": false 00:15:25.577 }, 00:15:25.577 "driver_specific": { 00:15:25.577 "nvme": [ 00:15:25.577 { 00:15:25.577 "pci_address": "0000:00:11.0", 00:15:25.577 "trid": { 00:15:25.577 "trtype": "PCIe", 00:15:25.577 "traddr": "0000:00:11.0" 00:15:25.577 }, 00:15:25.577 "ctrlr_data": { 00:15:25.577 "cntlid": 0, 00:15:25.577 "vendor_id": "0x1b36", 00:15:25.577 "model_number": "QEMU NVMe Ctrl", 00:15:25.577 "serial_number": "12341", 00:15:25.577 "firmware_revision": "8.0.0", 00:15:25.577 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:25.577 "oacs": { 00:15:25.577 "security": 0, 00:15:25.577 "format": 1, 00:15:25.577 "firmware": 0, 00:15:25.577 "ns_manage": 1 00:15:25.577 }, 00:15:25.577 "multi_ctrlr": false, 00:15:25.577 "ana_reporting": false 00:15:25.577 }, 00:15:25.577 "vs": { 00:15:25.577 "nvme_version": "1.4" 00:15:25.577 }, 00:15:25.577 "ns_data": { 00:15:25.577 "id": 1, 00:15:25.577 "can_share": false 00:15:25.577 } 00:15:25.577 } 00:15:25.577 ], 00:15:25.577 "mp_policy": "active_passive" 00:15:25.577 } 00:15:25.577 } 00:15:25.577 ]' 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:25.577 14:23:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=49019642-67fe-49ca-81df-c352f5ac4547 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49019642-67fe-49ca-81df-c352f5ac4547 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 
00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.577 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:25.577 { 00:15:25.577 "name": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:25.577 "aliases": [ 00:15:25.577 "lvs/nvme0n1p0" 00:15:25.577 ], 00:15:25.577 "product_name": "Logical Volume", 00:15:25.577 "block_size": 4096, 00:15:25.577 "num_blocks": 26476544, 00:15:25.577 "uuid": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:25.577 "assigned_rate_limits": { 00:15:25.577 "rw_ios_per_sec": 0, 00:15:25.577 "rw_mbytes_per_sec": 0, 00:15:25.577 "r_mbytes_per_sec": 0, 00:15:25.577 "w_mbytes_per_sec": 0 00:15:25.577 }, 00:15:25.577 "claimed": false, 00:15:25.577 "zoned": false, 00:15:25.577 "supported_io_types": { 00:15:25.577 "read": true, 00:15:25.577 "write": true, 00:15:25.577 "unmap": true, 00:15:25.577 "flush": false, 00:15:25.577 "reset": true, 00:15:25.577 "nvme_admin": false, 00:15:25.577 "nvme_io": false, 00:15:25.577 "nvme_io_md": false, 00:15:25.577 "write_zeroes": true, 00:15:25.577 "zcopy": false, 00:15:25.577 "get_zone_info": false, 00:15:25.577 "zone_management": false, 00:15:25.577 "zone_append": false, 00:15:25.577 "compare": false, 00:15:25.577 "compare_and_write": false, 00:15:25.577 "abort": false, 00:15:25.577 "seek_hole": true, 00:15:25.577 "seek_data": true, 00:15:25.577 "copy": false, 00:15:25.578 "nvme_iov_md": false 00:15:25.578 }, 00:15:25.578 "driver_specific": { 00:15:25.578 "lvol": { 00:15:25.578 "lvol_store_uuid": "49019642-67fe-49ca-81df-c352f5ac4547", 00:15:25.578 "base_bdev": "nvme0n1", 00:15:25.578 "thin_provision": true, 00:15:25.578 "num_allocated_clusters": 0, 00:15:25.578 "snapshot": false, 00:15:25.578 "clone": false, 00:15:25.578 "esnap_clone": false 00:15:25.578 } 00:15:25.578 } 00:15:25.578 } 00:15:25.578 ]' 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:25.578 14:23:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.578 14:23:07 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:25.578 { 00:15:25.578 "name": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:25.578 "aliases": [ 00:15:25.578 "lvs/nvme0n1p0" 00:15:25.578 ], 00:15:25.578 "product_name": "Logical Volume", 00:15:25.578 "block_size": 4096, 00:15:25.578 "num_blocks": 26476544, 00:15:25.578 "uuid": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:25.578 "assigned_rate_limits": { 00:15:25.578 "rw_ios_per_sec": 0, 00:15:25.578 "rw_mbytes_per_sec": 0, 00:15:25.578 "r_mbytes_per_sec": 0, 00:15:25.578 "w_mbytes_per_sec": 0 00:15:25.578 }, 00:15:25.578 "claimed": false, 00:15:25.578 "zoned": false, 00:15:25.578 "supported_io_types": { 00:15:25.578 "read": true, 00:15:25.578 "write": true, 00:15:25.578 "unmap": true, 00:15:25.578 "flush": false, 00:15:25.578 "reset": true, 00:15:25.578 "nvme_admin": false, 00:15:25.578 "nvme_io": false, 00:15:25.578 "nvme_io_md": false, 00:15:25.578 "write_zeroes": true, 00:15:25.578 "zcopy": false, 00:15:25.578 "get_zone_info": false, 00:15:25.578 "zone_management": false, 00:15:25.578 "zone_append": false, 00:15:25.578 "compare": false, 00:15:25.578 "compare_and_write": false, 00:15:25.578 "abort": false, 00:15:25.578 "seek_hole": true, 00:15:25.578 "seek_data": true, 00:15:25.578 "copy": false, 00:15:25.578 "nvme_iov_md": false 00:15:25.578 }, 00:15:25.578 "driver_specific": { 00:15:25.578 "lvol": { 00:15:25.578 "lvol_store_uuid": "49019642-67fe-49ca-81df-c352f5ac4547", 00:15:25.578 "base_bdev": "nvme0n1", 00:15:25.578 "thin_provision": true, 00:15:25.578 "num_allocated_clusters": 0, 00:15:25.578 "snapshot": false, 00:15:25.578 "clone": false, 00:15:25.578 "esnap_clone": false 00:15:25.578 } 00:15:25.578 } 00:15:25.578 } 00:15:25.578 ]' 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:25.578 14:23:07 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:25.838 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:25.838 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:26.097 { 00:15:26.097 "name": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:26.097 "aliases": [ 00:15:26.097 "lvs/nvme0n1p0" 00:15:26.097 ], 00:15:26.097 "product_name": "Logical Volume", 00:15:26.097 "block_size": 4096, 00:15:26.097 "num_blocks": 26476544, 00:15:26.097 "uuid": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:26.097 "assigned_rate_limits": { 00:15:26.097 "rw_ios_per_sec": 0, 00:15:26.097 "rw_mbytes_per_sec": 0, 00:15:26.097 "r_mbytes_per_sec": 0, 00:15:26.097 "w_mbytes_per_sec": 0 00:15:26.097 }, 00:15:26.097 "claimed": false, 00:15:26.097 "zoned": false, 00:15:26.097 "supported_io_types": { 00:15:26.097 "read": true, 00:15:26.097 "write": true, 00:15:26.097 "unmap": true, 00:15:26.097 "flush": false, 00:15:26.097 "reset": true, 00:15:26.097 "nvme_admin": false, 00:15:26.097 "nvme_io": false, 00:15:26.097 "nvme_io_md": false, 00:15:26.097 "write_zeroes": true, 00:15:26.097 "zcopy": false, 00:15:26.097 "get_zone_info": false, 00:15:26.097 "zone_management": false, 00:15:26.097 "zone_append": false, 00:15:26.097 "compare": false, 00:15:26.097 "compare_and_write": false, 00:15:26.097 "abort": false, 00:15:26.097 "seek_hole": true, 00:15:26.097 "seek_data": true, 00:15:26.097 "copy": false, 00:15:26.097 "nvme_iov_md": false 00:15:26.097 }, 00:15:26.097 "driver_specific": { 00:15:26.097 "lvol": { 00:15:26.097 "lvol_store_uuid": "49019642-67fe-49ca-81df-c352f5ac4547", 00:15:26.097 "base_bdev": "nvme0n1", 00:15:26.097 "thin_provision": true, 00:15:26.097 "num_allocated_clusters": 0, 00:15:26.097 "snapshot": false, 00:15:26.097 "clone": false, 00:15:26.097 "esnap_clone": false 00:15:26.097 } 00:15:26.097 } 00:15:26.097 } 00:15:26.097 ]' 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:26.097 14:23:07 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad -c nvc0n1p0 --l2p_dram_limit 60 00:15:26.357 [2024-11-29 14:23:08.023371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.357 [2024-11-29 14:23:08.023417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:26.357 [2024-11-29 14:23:08.023429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:26.357 
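[Annotation] At this point every piece of the FTL bdev is in place, so the bdev_ftl_create call whose initialization log starts below has these inputs: a thin-provisioned 103424 MiB lvol on the base NVMe device for user data, and a 5171 MiB split of the cache NVMe device as the non-volatile write buffer. Reconstructed from the trace as a plain RPC sequence (the lvstore/lvol identifiers are whatever a given run returns; 49019642-... and 5e92ecc9-... were this run's values, and the rpc.py path is illustrative):

    rpc=./scripts/rpc.py
    # Base side: 0000:00:11.0 -> nvme0n1 -> lvstore 'lvs' -> thin 103424 MiB lvol.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
    # Cache side: 0000:00:10.0 -> nvc0n1, 5171 MiB split off as nvc0n1p0.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev: data on the lvol, write buffer on the split, 60 MiB L2P DRAM limit
    # (l2p_dram_size_mb=60 in the trace); creation gets a 240 s RPC timeout.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60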
[2024-11-29 14:23:08.023436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.357 [2024-11-29 14:23:08.023487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.357 [2024-11-29 14:23:08.023506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:26.357 [2024-11-29 14:23:08.023523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:26.357 [2024-11-29 14:23:08.023534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.357 [2024-11-29 14:23:08.023568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:26.357 [2024-11-29 14:23:08.023824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:26.357 [2024-11-29 14:23:08.023838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.357 [2024-11-29 14:23:08.023852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:26.357 [2024-11-29 14:23:08.023858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:15:26.357 [2024-11-29 14:23:08.023866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.357 [2024-11-29 14:23:08.023893] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a29f4d40-bd7c-4939-9d5e-0d3d6211d2d3 00:15:26.357 [2024-11-29 14:23:08.024877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.357 [2024-11-29 14:23:08.024903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:26.357 [2024-11-29 14:23:08.024914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:26.357 [2024-11-29 14:23:08.024920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.357 [2024-11-29 14:23:08.030100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.357 [2024-11-29 14:23:08.030126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:26.358 [2024-11-29 14:23:08.030135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.081 ms 00:15:26.358 [2024-11-29 14:23:08.030141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.030222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 14:23:08.030239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:26.358 [2024-11-29 14:23:08.030255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:26.358 [2024-11-29 14:23:08.030261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.030302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 14:23:08.030309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:26.358 [2024-11-29 14:23:08.030325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:26.358 [2024-11-29 14:23:08.030331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.030369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:26.358 [2024-11-29 14:23:08.031681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 
14:23:08.031707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:26.358 [2024-11-29 14:23:08.031715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:15:26.358 [2024-11-29 14:23:08.031723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.031752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 14:23:08.031769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:26.358 [2024-11-29 14:23:08.031776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:26.358 [2024-11-29 14:23:08.031785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.031807] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:26.358 [2024-11-29 14:23:08.031920] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:26.358 [2024-11-29 14:23:08.031929] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:26.358 [2024-11-29 14:23:08.031938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:26.358 [2024-11-29 14:23:08.031946] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:26.358 [2024-11-29 14:23:08.031954] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:26.358 [2024-11-29 14:23:08.031960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:26.358 [2024-11-29 14:23:08.031969] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:26.358 [2024-11-29 14:23:08.031975] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:26.358 [2024-11-29 14:23:08.031984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:26.358 [2024-11-29 14:23:08.031991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 14:23:08.031999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:26.358 [2024-11-29 14:23:08.032004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:15:26.358 [2024-11-29 14:23:08.032011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.032088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.358 [2024-11-29 14:23:08.032097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:26.358 [2024-11-29 14:23:08.032104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:26.358 [2024-11-29 14:23:08.032110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.358 [2024-11-29 14:23:08.032215] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:26.358 [2024-11-29 14:23:08.032224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:26.358 [2024-11-29 14:23:08.032230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032243] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:26.358 [2024-11-29 14:23:08.032250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:26.358 [2024-11-29 14:23:08.032266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.358 [2024-11-29 14:23:08.032281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:26.358 [2024-11-29 14:23:08.032288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:26.358 [2024-11-29 14:23:08.032293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.358 [2024-11-29 14:23:08.032301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:26.358 [2024-11-29 14:23:08.032306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:26.358 [2024-11-29 14:23:08.032313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:26.358 [2024-11-29 14:23:08.032326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:26.358 [2024-11-29 14:23:08.032355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:26.358 [2024-11-29 14:23:08.032375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:26.358 [2024-11-29 14:23:08.032394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:26.358 [2024-11-29 14:23:08.032415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:26.358 [2024-11-29 14:23:08.032434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.358 [2024-11-29 14:23:08.032446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:26.358 [2024-11-29 14:23:08.032453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:26.358 [2024-11-29 14:23:08.032458] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.358 [2024-11-29 14:23:08.032465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:26.358 [2024-11-29 14:23:08.032471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:26.358 [2024-11-29 14:23:08.032478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:26.358 [2024-11-29 14:23:08.032501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:26.358 [2024-11-29 14:23:08.032508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032516] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:26.358 [2024-11-29 14:23:08.032522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:26.358 [2024-11-29 14:23:08.032532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.358 [2024-11-29 14:23:08.032546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:26.358 [2024-11-29 14:23:08.032553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:26.358 [2024-11-29 14:23:08.032560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:26.358 [2024-11-29 14:23:08.032566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:26.358 [2024-11-29 14:23:08.032577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:26.358 [2024-11-29 14:23:08.032583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:26.358 [2024-11-29 14:23:08.032593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:26.358 [2024-11-29 14:23:08.032603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.358 [2024-11-29 14:23:08.032611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:26.358 [2024-11-29 14:23:08.032617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:26.358 [2024-11-29 14:23:08.032625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:26.358 [2024-11-29 14:23:08.032631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:26.358 [2024-11-29 14:23:08.032639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:26.358 [2024-11-29 14:23:08.032645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:26.358 [2024-11-29 14:23:08.032653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:26.358 [2024-11-29 14:23:08.032660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:26.358 [2024-11-29 14:23:08.032668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:26.358 [2024-11-29 14:23:08.032674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:26.359 [2024-11-29 14:23:08.032708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:26.359 [2024-11-29 14:23:08.032715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:26.359 [2024-11-29 14:23:08.032727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:26.359 [2024-11-29 14:23:08.032734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:26.359 [2024-11-29 14:23:08.032741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:26.359 [2024-11-29 14:23:08.032748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.359 [2024-11-29 14:23:08.032753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:26.359 [2024-11-29 14:23:08.032762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:15:26.359 [2024-11-29 14:23:08.032775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.359 [2024-11-29 14:23:08.032843] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
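As a quick cross-check of the capacities reported in the layout dump above, the numbers are consistent with the devices set up earlier in this log: the 103424.00 MiB base device is the 26476544-block, 4096-byte-block logical volume, the 5171.00 MiB NV cache is the nvc0n1p0 split created with bdev_split_create, and the 20971520 L2P entries at the reported address size of 4 bytes account for the 80.00 MiB l2p region. The lines below are a hypothetical sanity-check sketch of that arithmetic, not commands from the test run:
# sanity-check arithmetic only; values are taken from the log lines above
echo $(( 26476544 * 4096 / 1024 / 1024 ))        # 103424 -> base device capacity in MiB
echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # 80     -> GiB of user space addressed by the L2P
echo $(( 20971520 * 4 / 1024 / 1024 ))           # 80     -> MiB, matching "Region l2p ... blocks: 80.00 MiB"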
00:15:26.359 [2024-11-29 14:23:08.032850] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:28.893 [2024-11-29 14:23:10.579996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.580051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:28.893 [2024-11-29 14:23:10.580064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2547.134 ms 00:15:28.893 [2024-11-29 14:23:10.580072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.598865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.598911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:28.893 [2024-11-29 14:23:10.598926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.712 ms 00:15:28.893 [2024-11-29 14:23:10.598934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.599050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.599060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:28.893 [2024-11-29 14:23:10.599070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:28.893 [2024-11-29 14:23:10.599077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.608603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.608644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:28.893 [2024-11-29 14:23:10.608660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.449 ms 00:15:28.893 [2024-11-29 14:23:10.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.608719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.608730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:28.893 [2024-11-29 14:23:10.608755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:28.893 [2024-11-29 14:23:10.608765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.609159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.609187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:28.893 [2024-11-29 14:23:10.609200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:15:28.893 [2024-11-29 14:23:10.609209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.609365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.609378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:28.893 [2024-11-29 14:23:10.609393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:28.893 [2024-11-29 14:23:10.609404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.615603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.615629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:28.893 [2024-11-29 
14:23:10.615639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:15:28.893 [2024-11-29 14:23:10.615654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.622278] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:28.893 [2024-11-29 14:23:10.635129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.635169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:28.893 [2024-11-29 14:23:10.635177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.404 ms 00:15:28.893 [2024-11-29 14:23:10.635184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.675976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.676099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:28.893 [2024-11-29 14:23:10.676136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.761 ms 00:15:28.893 [2024-11-29 14:23:10.676170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.676731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.676804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:28.893 [2024-11-29 14:23:10.676835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:15:28.893 [2024-11-29 14:23:10.676864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.893 [2024-11-29 14:23:10.682388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.893 [2024-11-29 14:23:10.682480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:28.893 [2024-11-29 14:23:10.682568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:15:28.893 [2024-11-29 14:23:10.682597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.687584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.687614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:29.153 [2024-11-29 14:23:10.687622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:15:29.153 [2024-11-29 14:23:10.687630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.687880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.687893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:29.153 [2024-11-29 14:23:10.687900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:15:29.153 [2024-11-29 14:23:10.687916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.710921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.710954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:29.153 [2024-11-29 14:23:10.710964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.973 ms 00:15:29.153 [2024-11-29 14:23:10.710972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.714421] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.714452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:29.153 [2024-11-29 14:23:10.714461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:15:29.153 [2024-11-29 14:23:10.714470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.716804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.716831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:29.153 [2024-11-29 14:23:10.716838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:15:29.153 [2024-11-29 14:23:10.716844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.719414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.719448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:29.153 [2024-11-29 14:23:10.719456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:15:29.153 [2024-11-29 14:23:10.719465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.719520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.719530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:29.153 [2024-11-29 14:23:10.719537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:29.153 [2024-11-29 14:23:10.719544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.719611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.153 [2024-11-29 14:23:10.719620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:29.153 [2024-11-29 14:23:10.719626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:29.153 [2024-11-29 14:23:10.719635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.153 [2024-11-29 14:23:10.720439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2696.723 ms, result 0 00:15:29.153 { 00:15:29.153 "name": "ftl0", 00:15:29.153 "uuid": "a29f4d40-bd7c-4939-9d5e-0d3d6211d2d3" 00:15:29.153 } 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.153 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:29.412 14:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:29.412 [ 00:15:29.412 { 00:15:29.412 "name": "ftl0", 00:15:29.412 "aliases": [ 00:15:29.412 "a29f4d40-bd7c-4939-9d5e-0d3d6211d2d3" 00:15:29.412 ], 00:15:29.412 "product_name": "FTL disk", 00:15:29.412 
"block_size": 4096, 00:15:29.412 "num_blocks": 20971520, 00:15:29.412 "uuid": "a29f4d40-bd7c-4939-9d5e-0d3d6211d2d3", 00:15:29.412 "assigned_rate_limits": { 00:15:29.412 "rw_ios_per_sec": 0, 00:15:29.412 "rw_mbytes_per_sec": 0, 00:15:29.412 "r_mbytes_per_sec": 0, 00:15:29.412 "w_mbytes_per_sec": 0 00:15:29.412 }, 00:15:29.412 "claimed": false, 00:15:29.412 "zoned": false, 00:15:29.412 "supported_io_types": { 00:15:29.412 "read": true, 00:15:29.412 "write": true, 00:15:29.412 "unmap": true, 00:15:29.412 "flush": true, 00:15:29.412 "reset": false, 00:15:29.412 "nvme_admin": false, 00:15:29.412 "nvme_io": false, 00:15:29.412 "nvme_io_md": false, 00:15:29.412 "write_zeroes": true, 00:15:29.412 "zcopy": false, 00:15:29.412 "get_zone_info": false, 00:15:29.412 "zone_management": false, 00:15:29.412 "zone_append": false, 00:15:29.412 "compare": false, 00:15:29.412 "compare_and_write": false, 00:15:29.412 "abort": false, 00:15:29.412 "seek_hole": false, 00:15:29.412 "seek_data": false, 00:15:29.412 "copy": false, 00:15:29.412 "nvme_iov_md": false 00:15:29.412 }, 00:15:29.412 "driver_specific": { 00:15:29.412 "ftl": { 00:15:29.412 "base_bdev": "5e92ecc9-054a-4bbd-9ffa-a4d27ac3f2ad", 00:15:29.412 "cache": "nvc0n1p0" 00:15:29.412 } 00:15:29.412 } 00:15:29.412 } 00:15:29.412 ] 00:15:29.412 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:29.412 14:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:29.412 14:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:29.671 14:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:29.671 14:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:29.931 [2024-11-29 14:23:11.561689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.931 [2024-11-29 14:23:11.561720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:29.932 [2024-11-29 14:23:11.561730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:29.932 [2024-11-29 14:23:11.561736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.561764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:29.932 [2024-11-29 14:23:11.562171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.562188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:29.932 [2024-11-29 14:23:11.562195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:15:29.932 [2024-11-29 14:23:11.562204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.562635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.562645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:29.932 [2024-11-29 14:23:11.562662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:15:29.932 [2024-11-29 14:23:11.562670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.565108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.565139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:29.932 [2024-11-29 
14:23:11.565146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:15:29.932 [2024-11-29 14:23:11.565153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.569748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.569775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:29.932 [2024-11-29 14:23:11.569783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:15:29.932 [2024-11-29 14:23:11.569791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.571083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.571119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:29.932 [2024-11-29 14:23:11.571127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:15:29.932 [2024-11-29 14:23:11.571133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.575213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.575246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:29.932 [2024-11-29 14:23:11.575254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.039 ms 00:15:29.932 [2024-11-29 14:23:11.575263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.575409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.575418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:29.932 [2024-11-29 14:23:11.575434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:15:29.932 [2024-11-29 14:23:11.575449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.576908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.576938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:29.932 [2024-11-29 14:23:11.576945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:15:29.932 [2024-11-29 14:23:11.576953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.577979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.578013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:29.932 [2024-11-29 14:23:11.578020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:15:29.932 [2024-11-29 14:23:11.578027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.578816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.578846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:29.932 [2024-11-29 14:23:11.578853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:15:29.932 [2024-11-29 14:23:11.578862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.579677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.932 [2024-11-29 14:23:11.579706] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:29.932 [2024-11-29 14:23:11.579713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:15:29.932 [2024-11-29 14:23:11.579720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.932 [2024-11-29 14:23:11.579751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:29.932 [2024-11-29 14:23:11.579764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 
14:23:11.579908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.579998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:29.932 [2024-11-29 14:23:11.580066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:29.932 [2024-11-29 14:23:11.580078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:29.933 [2024-11-29 14:23:11.580427] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:29.933 [2024-11-29 14:23:11.580433] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a29f4d40-bd7c-4939-9d5e-0d3d6211d2d3 00:15:29.933 [2024-11-29 14:23:11.580441] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:29.933 [2024-11-29 14:23:11.580448] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:29.933 [2024-11-29 14:23:11.580454] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:29.933 [2024-11-29 14:23:11.580460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:29.933 [2024-11-29 14:23:11.580467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:29.933 [2024-11-29 14:23:11.580473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:29.933 [2024-11-29 14:23:11.580479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:29.933 [2024-11-29 14:23:11.580484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:29.933 [2024-11-29 14:23:11.580507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:29.933 [2024-11-29 14:23:11.580513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.933 [2024-11-29 14:23:11.580520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:29.933 [2024-11-29 14:23:11.580526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:15:29.933 [2024-11-29 14:23:11.580533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.581913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.933 [2024-11-29 14:23:11.581935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:29.933 [2024-11-29 14:23:11.581942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:15:29.933 [2024-11-29 14:23:11.581949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.582037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:29.933 [2024-11-29 14:23:11.582046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:29.933 [2024-11-29 14:23:11.582052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:29.933 [2024-11-29 14:23:11.582059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.586885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.586914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:29.933 [2024-11-29 14:23:11.586921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.586929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 
[2024-11-29 14:23:11.586981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.586989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:29.933 [2024-11-29 14:23:11.586995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.587002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.587056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.587066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:29.933 [2024-11-29 14:23:11.587072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.587079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.587100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.587108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:29.933 [2024-11-29 14:23:11.587113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.587120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.595811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.595847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:29.933 [2024-11-29 14:23:11.595857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.595865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.603014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.603051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:29.933 [2024-11-29 14:23:11.603059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.603067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.933 [2024-11-29 14:23:11.603135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.933 [2024-11-29 14:23:11.603160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:29.933 [2024-11-29 14:23:11.603167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.933 [2024-11-29 14:23:11.603174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.934 [2024-11-29 14:23:11.603231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:29.934 [2024-11-29 14:23:11.603238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.934 [2024-11-29 14:23:11.603245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.934 [2024-11-29 14:23:11.603325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:29.934 [2024-11-29 14:23:11.603333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.934 [2024-11-29 14:23:11.603348] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.934 [2024-11-29 14:23:11.603396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:29.934 [2024-11-29 14:23:11.603402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.934 [2024-11-29 14:23:11.603409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.934 [2024-11-29 14:23:11.603456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:29.934 [2024-11-29 14:23:11.603464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.934 [2024-11-29 14:23:11.603471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:29.934 [2024-11-29 14:23:11.603535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:29.934 [2024-11-29 14:23:11.603541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:29.934 [2024-11-29 14:23:11.603548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:29.934 [2024-11-29 14:23:11.603702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.002 ms, result 0 00:15:29.934 true 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84281 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84281 ']' 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84281 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84281 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.934 killing process with pid 84281 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84281' 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84281 00:15:29.934 14:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84281 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.508 14:23:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:36.508 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:36.508 fio-3.35 00:15:36.508 Starting 1 thread 00:15:41.796 00:15:41.796 test: (groupid=0, jobs=1): err= 0: pid=84446: Fri Nov 29 14:23:22 2024 00:15:41.796 read: IOPS=835, BW=55.5MiB/s (58.2MB/s)(255MiB/4587msec) 00:15:41.796 slat (nsec): min=4158, max=42807, avg=7564.78, stdev=3786.80 00:15:41.796 clat (usec): min=252, max=2529, avg=539.92, stdev=211.16 00:15:41.796 lat (usec): min=256, max=2538, avg=547.48, stdev=213.32 00:15:41.796 clat percentiles (usec): 00:15:41.796 | 1.00th=[ 302], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 367], 00:15:41.796 | 30.00th=[ 404], 40.00th=[ 441], 50.00th=[ 494], 60.00th=[ 529], 00:15:41.796 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 898], 95.00th=[ 963], 00:15:41.796 | 99.00th=[ 1090], 99.50th=[ 1172], 99.90th=[ 1631], 99.95th=[ 2409], 00:15:41.796 | 99.99th=[ 2540] 00:15:41.796 write: IOPS=841, BW=55.9MiB/s (58.6MB/s)(256MiB/4583msec); 0 zone resets 00:15:41.796 slat (nsec): min=14486, max=74485, avg=21889.25, stdev=5639.03 00:15:41.796 clat (usec): min=303, max=2296, avg=610.40, stdev=227.66 00:15:41.796 lat (usec): min=321, max=2312, avg=632.29, stdev=230.77 00:15:41.796 clat percentiles (usec): 00:15:41.796 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 424], 00:15:41.796 | 30.00th=[ 490], 40.00th=[ 502], 50.00th=[ 553], 60.00th=[ 603], 00:15:41.796 | 70.00th=[ 668], 80.00th=[ 791], 90.00th=[ 996], 95.00th=[ 1020], 00:15:41.796 | 99.00th=[ 1221], 99.50th=[ 1303], 99.90th=[ 1778], 99.95th=[ 2008], 00:15:41.796 | 99.99th=[ 2311] 00:15:41.796 bw ( KiB/s): min=36040, max=76024, per=99.47%, avg=56908.44, stdev=15584.48, samples=9 00:15:41.796 iops : min= 530, max= 1118, avg=836.89, stdev=229.18, samples=9 00:15:41.796 lat (usec) : 500=45.88%, 750=35.06%, 
1000=14.33% 00:15:41.796 lat (msec) : 2=4.67%, 4=0.05% 00:15:41.796 cpu : usr=99.04%, sys=0.13%, ctx=13, majf=0, minf=1181 00:15:41.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.796 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.796 00:15:41.796 Run status group 0 (all jobs): 00:15:41.796 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=255MiB (267MB), run=4587-4587msec 00:15:41.796 WRITE: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=256MiB (269MB), run=4583-4583msec 00:15:42.055 ----------------------------------------------------- 00:15:42.055 Suppressions used: 00:15:42.055 count bytes template 00:15:42.055 1 5 /usr/src/fio/parse.c 00:15:42.055 1 8 libtcmalloc_minimal.so 00:15:42.055 1 904 libcrypto.so 00:15:42.055 ----------------------------------------------------- 00:15:42.055 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.055 14:23:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:42.317 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:42.317 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:42.317 fio-3.35 00:15:42.317 Starting 2 threads 00:16:08.969 00:16:08.969 first_half: (groupid=0, jobs=1): err= 0: pid=84549: Fri Nov 29 14:23:47 2024 00:16:08.969 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(255MiB/22349msec) 00:16:08.969 slat (nsec): min=3007, max=35835, avg=4088.15, stdev=1035.58 00:16:08.969 clat (usec): min=668, max=305888, avg=34562.88, stdev=19772.67 00:16:08.969 lat (usec): min=672, max=305893, avg=34566.97, stdev=19772.69 00:16:08.969 clat percentiles (msec): 00:16:08.969 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:16:08.969 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:08.969 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 45], 00:16:08.969 | 99.00th=[ 138], 99.50th=[ 167], 99.90th=[ 253], 99.95th=[ 271], 00:16:08.969 | 99.99th=[ 300] 00:16:08.969 write: IOPS=3118, BW=12.2MiB/s (12.8MB/s)(256MiB/21014msec); 0 zone resets 00:16:08.969 slat (usec): min=3, max=1238, avg= 5.51, stdev= 6.32 00:16:08.969 clat (usec): min=358, max=92325, avg=9140.56, stdev=15912.86 00:16:08.969 lat (usec): min=365, max=92331, avg=9146.08, stdev=15912.98 00:16:08.969 clat percentiles (usec): 00:16:08.969 | 1.00th=[ 676], 5.00th=[ 898], 10.00th=[ 1287], 20.00th=[ 2474], 00:16:08.969 | 30.00th=[ 3556], 40.00th=[ 4490], 50.00th=[ 5145], 60.00th=[ 5669], 00:16:08.969 | 70.00th=[ 6652], 80.00th=[ 8586], 90.00th=[10945], 95.00th=[58459], 00:16:08.969 | 99.00th=[83362], 99.50th=[86508], 99.90th=[89654], 99.95th=[90702], 00:16:08.969 | 99.99th=[91751] 00:16:08.969 bw ( KiB/s): min= 424, max=45424, per=84.05%, avg=20971.52, stdev=14270.63, samples=25 00:16:08.969 iops : min= 106, max=11356, avg=5242.88, stdev=3567.66, samples=25 00:16:08.969 lat (usec) : 500=0.04%, 750=1.15%, 1000=2.07% 00:16:08.969 lat (msec) : 2=5.33%, 4=8.70%, 10=26.17%, 20=3.69%, 50=47.72% 00:16:08.969 lat (msec) : 100=4.04%, 250=1.04%, 500=0.05% 00:16:08.969 cpu : usr=99.31%, sys=0.08%, ctx=48, majf=0, minf=5587 00:16:08.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:08.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.969 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.970 issued rwts: total=65392,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.970 second_half: (groupid=0, jobs=1): err= 0: pid=84550: Fri Nov 29 14:23:47 2024 00:16:08.970 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(255MiB/22473msec) 00:16:08.970 slat (usec): min=2, max=167, avg= 4.73, stdev= 1.43 00:16:08.970 clat (usec): min=567, max=289304, avg=34352.64, stdev=20203.52 00:16:08.970 lat (usec): min=574, max=289310, avg=34357.37, stdev=20203.51 00:16:08.970 clat percentiles (msec): 00:16:08.970 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 29], 00:16:08.970 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:08.970 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 
95.00th=[ 52], 00:16:08.970 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 199], 99.95th=[ 209], 00:16:08.970 | 99.99th=[ 279] 00:16:08.970 write: IOPS=3804, BW=14.9MiB/s (15.6MB/s)(256MiB/17226msec); 0 zone resets 00:16:08.970 slat (usec): min=3, max=206, avg= 6.53, stdev= 2.84 00:16:08.970 clat (usec): min=370, max=93215, avg=9669.16, stdev=16655.94 00:16:08.970 lat (usec): min=378, max=93222, avg=9675.69, stdev=16656.14 00:16:08.970 clat percentiles (usec): 00:16:08.970 | 1.00th=[ 676], 5.00th=[ 783], 10.00th=[ 881], 20.00th=[ 1205], 00:16:08.970 | 30.00th=[ 1778], 40.00th=[ 3195], 50.00th=[ 4424], 60.00th=[ 5407], 00:16:08.970 | 70.00th=[ 7832], 80.00th=[10159], 90.00th=[24773], 95.00th=[57934], 00:16:08.970 | 99.00th=[83362], 99.50th=[86508], 99.90th=[90702], 99.95th=[91751], 00:16:08.970 | 99.99th=[92799] 00:16:08.970 bw ( KiB/s): min= 888, max=41816, per=95.53%, avg=23835.00, stdev=13682.97, samples=22 00:16:08.970 iops : min= 222, max=10454, avg=5958.73, stdev=3420.71, samples=22 00:16:08.970 lat (usec) : 500=0.02%, 750=1.69%, 1000=5.57% 00:16:08.970 lat (msec) : 2=8.63%, 4=7.28%, 10=17.63%, 20=5.73%, 50=48.02% 00:16:08.970 lat (msec) : 100=4.14%, 250=1.29%, 500=0.01% 00:16:08.970 cpu : usr=99.35%, sys=0.12%, ctx=36, majf=0, minf=5553 00:16:08.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:08.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.970 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.970 issued rwts: total=65253,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.970 00:16:08.970 Run status group 0 (all jobs): 00:16:08.970 READ: bw=22.7MiB/s (23.8MB/s), 11.3MiB/s-11.4MiB/s (11.9MB/s-12.0MB/s), io=510MiB (535MB), run=22349-22473msec 00:16:08.970 WRITE: bw=24.4MiB/s (25.5MB/s), 12.2MiB/s-14.9MiB/s (12.8MB/s-15.6MB/s), io=512MiB (537MB), run=17226-21014msec 00:16:08.970 ----------------------------------------------------- 00:16:08.970 Suppressions used: 00:16:08.970 count bytes template 00:16:08.970 2 10 /usr/src/fio/parse.c 00:16:08.970 5 480 /usr/src/fio/iolog.c 00:16:08.970 1 8 libtcmalloc_minimal.so 00:16:08.970 1 904 libcrypto.so 00:16:08.970 ----------------------------------------------------- 00:16:08.970 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.970 14:23:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:08.970 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:08.970 fio-3.35 00:16:08.970 Starting 1 thread 00:16:23.852 00:16:23.852 test: (groupid=0, jobs=1): err= 0: pid=84840: Fri Nov 29 14:24:03 2024 00:16:23.852 read: IOPS=7004, BW=27.4MiB/s (28.7MB/s)(255MiB/9308msec) 00:16:23.852 slat (nsec): min=2933, max=35915, avg=4695.86, stdev=1139.39 00:16:23.852 clat (usec): min=522, max=38169, avg=18263.93, stdev=2936.65 00:16:23.852 lat (usec): min=526, max=38172, avg=18268.63, stdev=2936.62 00:16:23.852 clat percentiles (usec): 00:16:23.852 | 1.00th=[15008], 5.00th=[15270], 10.00th=[15401], 20.00th=[15664], 00:16:23.852 | 30.00th=[15926], 40.00th=[16319], 50.00th=[17171], 60.00th=[19006], 00:16:23.852 | 70.00th=[19792], 80.00th=[20841], 90.00th=[22152], 95.00th=[23462], 00:16:23.852 | 99.00th=[27132], 99.50th=[28181], 99.90th=[32637], 99.95th=[33817], 00:16:23.852 | 99.99th=[36963] 00:16:23.852 write: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(256MiB/5278msec); 0 zone resets 00:16:23.852 slat (usec): min=4, max=367, avg= 6.60, stdev= 3.97 00:16:23.852 clat (usec): min=483, max=65874, avg=10264.29, stdev=11591.28 00:16:23.852 lat (usec): min=489, max=65880, avg=10270.89, stdev=11591.26 00:16:23.852 clat percentiles (usec): 00:16:23.852 | 1.00th=[ 685], 5.00th=[ 824], 10.00th=[ 922], 20.00th=[ 1139], 00:16:23.852 | 30.00th=[ 1434], 40.00th=[ 2212], 50.00th=[ 7963], 60.00th=[ 9634], 00:16:23.852 | 70.00th=[11469], 80.00th=[13698], 90.00th=[30016], 95.00th=[37487], 00:16:23.852 | 99.00th=[46924], 99.50th=[50594], 99.90th=[55313], 99.95th=[57934], 00:16:23.852 | 99.99th=[64750] 00:16:23.852 bw ( KiB/s): min=22048, max=62096, per=95.95%, avg=47655.27, stdev=11461.24, samples=11 00:16:23.852 iops : min= 5512, max=15524, avg=11913.82, stdev=2865.31, samples=11 00:16:23.852 lat (usec) : 500=0.01%, 750=1.17%, 1000=5.81% 00:16:23.852 lat (msec) : 2=12.55%, 4=1.55%, 10=10.22%, 20=46.53%, 50=21.88% 00:16:23.852 lat (msec) : 100=0.28% 00:16:23.852 cpu : usr=99.09%, 
sys=0.12%, ctx=20, majf=0, minf=5577 00:16:23.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:23.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.852 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.852 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.852 00:16:23.852 Run status group 0 (all jobs): 00:16:23.852 READ: bw=27.4MiB/s (28.7MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=255MiB (267MB), run=9308-9308msec 00:16:23.852 WRITE: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=256MiB (268MB), run=5278-5278msec 00:16:23.852 ----------------------------------------------------- 00:16:23.852 Suppressions used: 00:16:23.852 count bytes template 00:16:23.852 1 5 /usr/src/fio/parse.c 00:16:23.852 2 192 /usr/src/fio/iolog.c 00:16:23.852 1 8 libtcmalloc_minimal.so 00:16:23.852 1 904 libcrypto.so 00:16:23.852 ----------------------------------------------------- 00:16:23.852 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:23.852 Remove shared memory files 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69810 /dev/shm/spdk_tgt_trace.pid83223 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:23.852 00:16:23.852 real 1m0.585s 00:16:23.852 user 2m10.716s 00:16:23.852 sys 0m2.721s 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.852 14:24:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.853 ************************************ 00:16:23.853 END TEST ftl_fio_basic 00:16:23.853 ************************************ 00:16:23.853 14:24:04 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:23.853 14:24:04 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:23.853 14:24:04 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.853 14:24:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:23.853 ************************************ 00:16:23.853 START TEST ftl_bdevperf 00:16:23.853 ************************************ 00:16:23.853 14:24:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:23.853 * Looking for test storage... 
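(For context on the three fio jobs above: ftl_fio_basic drives them through the SPDK fio bdev plugin rather than a kernel block device. The fio_bdev/fio_plugin helpers visible in the xtrace preload the plugin, together with libasan in this ASAN build, and then run stock fio against a job file that selects ioengine=spdk_bdev. A minimal sketch of the invocation pattern, with paths exactly as they appear in this run:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

The randw-verify, randw-verify-j2 and randw-verify-depth128 job files differ mainly in block size, job count and queue depth, as can be read off the per-job headers fio printed above: 68 KiB at QD1, 4 KiB at QD128 with two jobs, and 4 KiB at QD128 with one job.)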
00:16:23.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:23.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.853 --rc genhtml_branch_coverage=1 00:16:23.853 --rc genhtml_function_coverage=1 00:16:23.853 --rc genhtml_legend=1 00:16:23.853 --rc geninfo_all_blocks=1 00:16:23.853 --rc geninfo_unexecuted_blocks=1 00:16:23.853 00:16:23.853 ' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:23.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.853 --rc genhtml_branch_coverage=1 00:16:23.853 
--rc genhtml_function_coverage=1 00:16:23.853 --rc genhtml_legend=1 00:16:23.853 --rc geninfo_all_blocks=1 00:16:23.853 --rc geninfo_unexecuted_blocks=1 00:16:23.853 00:16:23.853 ' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:23.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.853 --rc genhtml_branch_coverage=1 00:16:23.853 --rc genhtml_function_coverage=1 00:16:23.853 --rc genhtml_legend=1 00:16:23.853 --rc geninfo_all_blocks=1 00:16:23.853 --rc geninfo_unexecuted_blocks=1 00:16:23.853 00:16:23.853 ' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:23.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.853 --rc genhtml_branch_coverage=1 00:16:23.853 --rc genhtml_function_coverage=1 00:16:23.853 --rc genhtml_legend=1 00:16:23.853 --rc geninfo_all_blocks=1 00:16:23.853 --rc geninfo_unexecuted_blocks=1 00:16:23.853 00:16:23.853 ' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85078 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85078 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85078 ']' 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.853 14:24:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:23.853 [2024-11-29 14:24:05.230688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:23.853 [2024-11-29 14:24:05.231630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85078 ] 00:16:23.853 [2024-11-29 14:24:05.382361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.853 [2024-11-29 14:24:05.455888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:24.426 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:24.687 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.946 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:24.946 { 00:16:24.946 "name": "nvme0n1", 00:16:24.946 "aliases": [ 00:16:24.946 "1335c693-ca98-4daf-8595-5aacecc87fba" 00:16:24.946 ], 00:16:24.946 "product_name": "NVMe disk", 00:16:24.946 "block_size": 4096, 00:16:24.946 "num_blocks": 1310720, 00:16:24.946 "uuid": "1335c693-ca98-4daf-8595-5aacecc87fba", 00:16:24.946 "numa_id": -1, 00:16:24.946 "assigned_rate_limits": { 00:16:24.946 "rw_ios_per_sec": 0, 00:16:24.946 "rw_mbytes_per_sec": 0, 00:16:24.946 "r_mbytes_per_sec": 0, 00:16:24.946 "w_mbytes_per_sec": 0 00:16:24.946 }, 00:16:24.946 "claimed": true, 00:16:24.946 "claim_type": "read_many_write_one", 00:16:24.946 "zoned": false, 00:16:24.946 "supported_io_types": { 00:16:24.946 "read": true, 00:16:24.946 "write": true, 00:16:24.946 "unmap": true, 00:16:24.946 "flush": true, 00:16:24.946 "reset": true, 00:16:24.946 "nvme_admin": true, 00:16:24.946 "nvme_io": true, 00:16:24.946 "nvme_io_md": false, 00:16:24.946 "write_zeroes": true, 00:16:24.946 "zcopy": false, 00:16:24.946 "get_zone_info": false, 00:16:24.946 "zone_management": false, 00:16:24.946 "zone_append": false, 00:16:24.946 "compare": true, 00:16:24.946 "compare_and_write": false, 00:16:24.946 "abort": true, 00:16:24.946 "seek_hole": false, 00:16:24.946 "seek_data": false, 00:16:24.946 "copy": true, 00:16:24.946 "nvme_iov_md": false 00:16:24.946 }, 00:16:24.946 "driver_specific": { 00:16:24.946 
"nvme": [ 00:16:24.946 { 00:16:24.946 "pci_address": "0000:00:11.0", 00:16:24.946 "trid": { 00:16:24.946 "trtype": "PCIe", 00:16:24.946 "traddr": "0000:00:11.0" 00:16:24.946 }, 00:16:24.946 "ctrlr_data": { 00:16:24.946 "cntlid": 0, 00:16:24.946 "vendor_id": "0x1b36", 00:16:24.946 "model_number": "QEMU NVMe Ctrl", 00:16:24.946 "serial_number": "12341", 00:16:24.946 "firmware_revision": "8.0.0", 00:16:24.946 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.946 "oacs": { 00:16:24.946 "security": 0, 00:16:24.946 "format": 1, 00:16:24.946 "firmware": 0, 00:16:24.946 "ns_manage": 1 00:16:24.946 }, 00:16:24.946 "multi_ctrlr": false, 00:16:24.946 "ana_reporting": false 00:16:24.946 }, 00:16:24.946 "vs": { 00:16:24.946 "nvme_version": "1.4" 00:16:24.946 }, 00:16:24.946 "ns_data": { 00:16:24.946 "id": 1, 00:16:24.946 "can_share": false 00:16:24.946 } 00:16:24.946 } 00:16:24.946 ], 00:16:24.946 "mp_policy": "active_passive" 00:16:24.946 } 00:16:24.946 } 00:16:24.946 ]' 00:16:24.946 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:24.946 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:24.946 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.947 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:25.207 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=49019642-67fe-49ca-81df-c352f5ac4547 00:16:25.207 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:25.207 14:24:06 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49019642-67fe-49ca-81df-c352f5ac4547 00:16:25.465 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:25.465 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=fd516d03-7d60-4e9a-9c5e-617936f8faf0 00:16:25.465 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fd516d03-7d60-4e9a-9c5e-617936f8faf0 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.724 14:24:07 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:25.724 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:25.985 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.985 { 00:16:25.985 "name": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:25.985 "aliases": [ 00:16:25.985 "lvs/nvme0n1p0" 00:16:25.985 ], 00:16:25.985 "product_name": "Logical Volume", 00:16:25.985 "block_size": 4096, 00:16:25.985 "num_blocks": 26476544, 00:16:25.985 "uuid": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:25.985 "assigned_rate_limits": { 00:16:25.985 "rw_ios_per_sec": 0, 00:16:25.985 "rw_mbytes_per_sec": 0, 00:16:25.985 "r_mbytes_per_sec": 0, 00:16:25.985 "w_mbytes_per_sec": 0 00:16:25.985 }, 00:16:25.985 "claimed": false, 00:16:25.985 "zoned": false, 00:16:25.985 "supported_io_types": { 00:16:25.985 "read": true, 00:16:25.985 "write": true, 00:16:25.985 "unmap": true, 00:16:25.985 "flush": false, 00:16:25.985 "reset": true, 00:16:25.985 "nvme_admin": false, 00:16:25.985 "nvme_io": false, 00:16:25.985 "nvme_io_md": false, 00:16:25.985 "write_zeroes": true, 00:16:25.985 "zcopy": false, 00:16:25.985 "get_zone_info": false, 00:16:25.985 "zone_management": false, 00:16:25.985 "zone_append": false, 00:16:25.985 "compare": false, 00:16:25.985 "compare_and_write": false, 00:16:25.985 "abort": false, 00:16:25.985 "seek_hole": true, 00:16:25.985 "seek_data": true, 00:16:25.985 "copy": false, 00:16:25.985 "nvme_iov_md": false 00:16:25.985 }, 00:16:25.985 "driver_specific": { 00:16:25.985 "lvol": { 00:16:25.985 "lvol_store_uuid": "fd516d03-7d60-4e9a-9c5e-617936f8faf0", 00:16:25.985 "base_bdev": "nvme0n1", 00:16:25.985 "thin_provision": true, 00:16:25.985 "num_allocated_clusters": 0, 00:16:25.985 "snapshot": false, 00:16:25.985 "clone": false, 00:16:25.985 "esnap_clone": false 00:16:25.985 } 00:16:25.985 } 00:16:25.986 } 00:16:25.986 ]' 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.986 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:26.245 14:24:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.503 { 00:16:26.503 "name": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:26.503 "aliases": [ 00:16:26.503 "lvs/nvme0n1p0" 00:16:26.503 ], 00:16:26.503 "product_name": "Logical Volume", 00:16:26.503 "block_size": 4096, 00:16:26.503 "num_blocks": 26476544, 00:16:26.503 "uuid": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:26.503 "assigned_rate_limits": { 00:16:26.503 "rw_ios_per_sec": 0, 00:16:26.503 "rw_mbytes_per_sec": 0, 00:16:26.503 "r_mbytes_per_sec": 0, 00:16:26.503 "w_mbytes_per_sec": 0 00:16:26.503 }, 00:16:26.503 "claimed": false, 00:16:26.503 "zoned": false, 00:16:26.503 "supported_io_types": { 00:16:26.503 "read": true, 00:16:26.503 "write": true, 00:16:26.503 "unmap": true, 00:16:26.503 "flush": false, 00:16:26.503 "reset": true, 00:16:26.503 "nvme_admin": false, 00:16:26.503 "nvme_io": false, 00:16:26.503 "nvme_io_md": false, 00:16:26.503 "write_zeroes": true, 00:16:26.503 "zcopy": false, 00:16:26.503 "get_zone_info": false, 00:16:26.503 "zone_management": false, 00:16:26.503 "zone_append": false, 00:16:26.503 "compare": false, 00:16:26.503 "compare_and_write": false, 00:16:26.503 "abort": false, 00:16:26.503 "seek_hole": true, 00:16:26.503 "seek_data": true, 00:16:26.503 "copy": false, 00:16:26.503 "nvme_iov_md": false 00:16:26.503 }, 00:16:26.503 "driver_specific": { 00:16:26.503 "lvol": { 00:16:26.503 "lvol_store_uuid": "fd516d03-7d60-4e9a-9c5e-617936f8faf0", 00:16:26.503 "base_bdev": "nvme0n1", 00:16:26.503 "thin_provision": true, 00:16:26.503 "num_allocated_clusters": 0, 00:16:26.503 "snapshot": false, 00:16:26.503 "clone": false, 00:16:26.503 "esnap_clone": false 00:16:26.503 } 00:16:26.503 } 00:16:26.503 } 00:16:26.503 ]' 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:26.503 14:24:08 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:26.761 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e14e398a-3e36-4636-b50c-7fdf5f755ffd 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:27.019 { 00:16:27.019 "name": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:27.019 "aliases": [ 00:16:27.019 "lvs/nvme0n1p0" 00:16:27.019 ], 00:16:27.019 "product_name": "Logical Volume", 00:16:27.019 "block_size": 4096, 00:16:27.019 "num_blocks": 26476544, 00:16:27.019 "uuid": "e14e398a-3e36-4636-b50c-7fdf5f755ffd", 00:16:27.019 "assigned_rate_limits": { 00:16:27.019 "rw_ios_per_sec": 0, 00:16:27.019 "rw_mbytes_per_sec": 0, 00:16:27.019 "r_mbytes_per_sec": 0, 00:16:27.019 "w_mbytes_per_sec": 0 00:16:27.019 }, 00:16:27.019 "claimed": false, 00:16:27.019 "zoned": false, 00:16:27.019 "supported_io_types": { 00:16:27.019 "read": true, 00:16:27.019 "write": true, 00:16:27.019 "unmap": true, 00:16:27.019 "flush": false, 00:16:27.019 "reset": true, 00:16:27.019 "nvme_admin": false, 00:16:27.019 "nvme_io": false, 00:16:27.019 "nvme_io_md": false, 00:16:27.019 "write_zeroes": true, 00:16:27.019 "zcopy": false, 00:16:27.019 "get_zone_info": false, 00:16:27.019 "zone_management": false, 00:16:27.019 "zone_append": false, 00:16:27.019 "compare": false, 00:16:27.019 "compare_and_write": false, 00:16:27.019 "abort": false, 00:16:27.019 "seek_hole": true, 00:16:27.019 "seek_data": true, 00:16:27.019 "copy": false, 00:16:27.019 "nvme_iov_md": false 00:16:27.019 }, 00:16:27.019 "driver_specific": { 00:16:27.019 "lvol": { 00:16:27.019 "lvol_store_uuid": "fd516d03-7d60-4e9a-9c5e-617936f8faf0", 00:16:27.019 "base_bdev": "nvme0n1", 00:16:27.019 "thin_provision": true, 00:16:27.019 "num_allocated_clusters": 0, 00:16:27.019 "snapshot": false, 00:16:27.019 "clone": false, 00:16:27.019 "esnap_clone": false 00:16:27.019 } 00:16:27.019 } 00:16:27.019 } 00:16:27.019 ]' 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:27.019 14:24:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e14e398a-3e36-4636-b50c-7fdf5f755ffd -c nvc0n1p0 --l2p_dram_limit 20 00:16:27.278 [2024-11-29 14:24:08.846305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.846351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:27.278 [2024-11-29 14:24:08.846367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:27.278 [2024-11-29 14:24:08.846374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.846423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.846431] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:27.278 [2024-11-29 14:24:08.846441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:27.278 [2024-11-29 14:24:08.846448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.846463] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:27.278 [2024-11-29 14:24:08.846709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:27.278 [2024-11-29 14:24:08.846723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.846730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:27.278 [2024-11-29 14:24:08.846739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:27.278 [2024-11-29 14:24:08.846745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.846777] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 98221dd8-0fef-47f8-9c02-90fd5168af2f 00:16:27.278 [2024-11-29 14:24:08.848092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.848125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:27.278 [2024-11-29 14:24:08.848134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:27.278 [2024-11-29 14:24:08.848142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.855089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.855119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:27.278 [2024-11-29 14:24:08.855127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:16:27.278 [2024-11-29 14:24:08.855136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.855234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.855247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:27.278 [2024-11-29 14:24:08.855254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:27.278 [2024-11-29 14:24:08.855262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.855300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.855312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:27.278 [2024-11-29 14:24:08.855318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:27.278 [2024-11-29 14:24:08.855327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.855346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:27.278 [2024-11-29 14:24:08.857016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.857044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:27.278 [2024-11-29 14:24:08.857053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:16:27.278 [2024-11-29 14:24:08.857059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.278 [2024-11-29 14:24:08.857088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.278 [2024-11-29 14:24:08.857095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:27.278 [2024-11-29 14:24:08.857105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:27.279 [2024-11-29 14:24:08.857111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.279 [2024-11-29 14:24:08.857131] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:27.279 [2024-11-29 14:24:08.857251] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:27.279 [2024-11-29 14:24:08.857262] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:27.279 [2024-11-29 14:24:08.857274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:27.279 [2024-11-29 14:24:08.857284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857291] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:27.279 [2024-11-29 14:24:08.857307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:27.279 [2024-11-29 14:24:08.857314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:27.279 [2024-11-29 14:24:08.857320] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:27.279 [2024-11-29 14:24:08.857327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.279 [2024-11-29 14:24:08.857333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:27.279 [2024-11-29 14:24:08.857342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:16:27.279 [2024-11-29 14:24:08.857348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.279 [2024-11-29 14:24:08.857414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.279 [2024-11-29 14:24:08.857421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:27.279 [2024-11-29 14:24:08.857429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:27.279 [2024-11-29 14:24:08.857434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.279 [2024-11-29 14:24:08.857519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:27.279 [2024-11-29 14:24:08.857533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:27.279 [2024-11-29 14:24:08.857541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:27.279 [2024-11-29 14:24:08.857561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:27.279 
[2024-11-29 14:24:08.857574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:27.279 [2024-11-29 14:24:08.857581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:27.279 [2024-11-29 14:24:08.857611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:27.279 [2024-11-29 14:24:08.857618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:27.279 [2024-11-29 14:24:08.857626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:27.279 [2024-11-29 14:24:08.857632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:27.279 [2024-11-29 14:24:08.857638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:27.279 [2024-11-29 14:24:08.857644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:27.279 [2024-11-29 14:24:08.857656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:27.279 [2024-11-29 14:24:08.857677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:27.279 [2024-11-29 14:24:08.857699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:27.279 [2024-11-29 14:24:08.857719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:27.279 [2024-11-29 14:24:08.857741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:27.279 [2024-11-29 14:24:08.857762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:27.279 [2024-11-29 14:24:08.857774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:27.279 [2024-11-29 14:24:08.857780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:27.279 [2024-11-29 14:24:08.857788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:27.279 [2024-11-29 14:24:08.857794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:27.279 [2024-11-29 14:24:08.857801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:27.279 [2024-11-29 14:24:08.857807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:27.279 [2024-11-29 14:24:08.857819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:27.279 [2024-11-29 14:24:08.857827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:27.279 [2024-11-29 14:24:08.857843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:27.279 [2024-11-29 14:24:08.857849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:27.279 [2024-11-29 14:24:08.857863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:27.279 [2024-11-29 14:24:08.857870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:27.279 [2024-11-29 14:24:08.857876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:27.279 [2024-11-29 14:24:08.857883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:27.279 [2024-11-29 14:24:08.857889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:27.279 [2024-11-29 14:24:08.857896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:27.279 [2024-11-29 14:24:08.857907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:27.279 [2024-11-29 14:24:08.857917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:27.279 [2024-11-29 14:24:08.857927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:27.279 [2024-11-29 14:24:08.857937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:27.279 [2024-11-29 14:24:08.857943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:27.279 [2024-11-29 14:24:08.857951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:27.280 [2024-11-29 14:24:08.857957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:27.280 [2024-11-29 14:24:08.857968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:27.280 [2024-11-29 14:24:08.857975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:27.280 [2024-11-29 14:24:08.857982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:27.280 [2024-11-29 14:24:08.857989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:27.280 [2024-11-29 14:24:08.857997] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:27.280 [2024-11-29 14:24:08.858030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:27.280 [2024-11-29 14:24:08.858039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:27.280 [2024-11-29 14:24:08.858053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:27.280 [2024-11-29 14:24:08.858058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:27.280 [2024-11-29 14:24:08.858064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:27.280 [2024-11-29 14:24:08.858070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.280 [2024-11-29 14:24:08.858081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:27.280 [2024-11-29 14:24:08.858089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:16:27.280 [2024-11-29 14:24:08.858096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.280 [2024-11-29 14:24:08.858120] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:27.280 [2024-11-29 14:24:08.858145] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:31.467 [2024-11-29 14:24:12.582860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.583072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:31.467 [2024-11-29 14:24:12.583129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3724.724 ms 00:16:31.467 [2024-11-29 14:24:12.583152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.604438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.604677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.467 [2024-11-29 14:24:12.604932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.179 ms 00:16:31.467 [2024-11-29 14:24:12.604975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.605131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.605332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.467 [2024-11-29 14:24:12.605369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:16:31.467 [2024-11-29 14:24:12.605407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.615780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.615912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.467 [2024-11-29 14:24:12.615969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.273 ms 00:16:31.467 [2024-11-29 14:24:12.615995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.616033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.616061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.467 [2024-11-29 14:24:12.616081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.467 [2024-11-29 14:24:12.616102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.616571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.616677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.467 [2024-11-29 14:24:12.616732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:16:31.467 [2024-11-29 14:24:12.616760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.616881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.616946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.467 [2024-11-29 14:24:12.616970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:31.467 [2024-11-29 14:24:12.616991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.622780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.622914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.467 [2024-11-29 
14:24:12.622980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 00:16:31.467 [2024-11-29 14:24:12.623006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.630468] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:31.467 [2024-11-29 14:24:12.636003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.636092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.467 [2024-11-29 14:24:12.636271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.926 ms 00:16:31.467 [2024-11-29 14:24:12.636290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.709842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.709952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:31.467 [2024-11-29 14:24:12.709999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.520 ms 00:16:31.467 [2024-11-29 14:24:12.710018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.710177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.710199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.467 [2024-11-29 14:24:12.710217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:31.467 [2024-11-29 14:24:12.710232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.713617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.713708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:31.467 [2024-11-29 14:24:12.713752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.359 ms 00:16:31.467 [2024-11-29 14:24:12.713770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.717053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.717141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:31.467 [2024-11-29 14:24:12.717194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.208 ms 00:16:31.467 [2024-11-29 14:24:12.717210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.717748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.717822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.467 [2024-11-29 14:24:12.718098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:31.467 [2024-11-29 14:24:12.718150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.467 [2024-11-29 14:24:12.750881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.467 [2024-11-29 14:24:12.750979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:31.468 [2024-11-29 14:24:12.751022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.702 ms 00:16:31.468 [2024-11-29 14:24:12.751041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.755883] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.468 [2024-11-29 14:24:12.755975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:31.468 [2024-11-29 14:24:12.756022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.796 ms 00:16:31.468 [2024-11-29 14:24:12.756040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.759605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.468 [2024-11-29 14:24:12.759693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:31.468 [2024-11-29 14:24:12.759735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.473 ms 00:16:31.468 [2024-11-29 14:24:12.759752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.763357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.468 [2024-11-29 14:24:12.763446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.468 [2024-11-29 14:24:12.763510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.572 ms 00:16:31.468 [2024-11-29 14:24:12.763529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.763604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.468 [2024-11-29 14:24:12.763623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.468 [2024-11-29 14:24:12.763643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:31.468 [2024-11-29 14:24:12.763661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.763726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.468 [2024-11-29 14:24:12.763933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.468 [2024-11-29 14:24:12.763945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:31.468 [2024-11-29 14:24:12.763951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.468 [2024-11-29 14:24:12.764779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3918.100 ms, result 0 00:16:31.468 { 00:16:31.468 "name": "ftl0", 00:16:31.468 "uuid": "98221dd8-0fef-47f8-9c02-90fd5168af2f" 00:16:31.468 } 00:16:31.468 14:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:31.468 14:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:31.468 14:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:31.468 14:24:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:31.468 [2024-11-29 14:24:13.069933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.468 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:31.468 Zero copy mechanism will not be used. 00:16:31.468 Running I/O for 4 seconds... 
00:16:33.334 784.00 IOPS, 52.06 MiB/s [2024-11-29T14:24:16.508Z] 889.00 IOPS, 59.04 MiB/s [2024-11-29T14:24:17.078Z] 871.00 IOPS, 57.84 MiB/s [2024-11-29T14:24:17.339Z] 829.00 IOPS, 55.05 MiB/s 00:16:35.546 Latency(us) 00:16:35.546 [2024-11-29T14:24:17.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.546 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:35.546 ftl0 : 4.00 828.70 55.03 0.00 0.00 1282.63 274.12 3062.55 00:16:35.546 [2024-11-29T14:24:17.340Z] =================================================================================================================== 00:16:35.546 [2024-11-29T14:24:17.340Z] Total : 828.70 55.03 0.00 0.00 1282.63 274.12 3062.55 00:16:35.546 [2024-11-29 14:24:17.078343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.546 { 00:16:35.546 "results": [ 00:16:35.546 { 00:16:35.546 "job": "ftl0", 00:16:35.546 "core_mask": "0x1", 00:16:35.546 "workload": "randwrite", 00:16:35.546 "status": "finished", 00:16:35.546 "queue_depth": 1, 00:16:35.546 "io_size": 69632, 00:16:35.546 "runtime": 4.002665, 00:16:35.546 "iops": 828.6978800374251, 00:16:35.546 "mibps": 55.03071859623526, 00:16:35.546 "io_failed": 0, 00:16:35.546 "io_timeout": 0, 00:16:35.546 "avg_latency_us": 1282.6316569652838, 00:16:35.546 "min_latency_us": 274.11692307692306, 00:16:35.546 "max_latency_us": 3062.547692307692 00:16:35.546 } 00:16:35.546 ], 00:16:35.546 "core_count": 1 00:16:35.546 } 00:16:35.546 14:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:35.546 [2024-11-29 14:24:17.190292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:35.546 Running I/O for 4 seconds... 
00:16:37.431 6649.00 IOPS, 25.97 MiB/s [2024-11-29T14:24:20.609Z] 5927.00 IOPS, 23.15 MiB/s [2024-11-29T14:24:21.551Z] 5755.00 IOPS, 22.48 MiB/s [2024-11-29T14:24:21.551Z] 5757.50 IOPS, 22.49 MiB/s 00:16:39.757 Latency(us) 00:16:39.757 [2024-11-29T14:24:21.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.757 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.757 ftl0 : 4.03 5746.48 22.45 0.00 0.00 22201.73 349.74 44564.48 00:16:39.757 [2024-11-29T14:24:21.551Z] =================================================================================================================== 00:16:39.757 [2024-11-29T14:24:21.551Z] Total : 5746.48 22.45 0.00 0.00 22201.73 0.00 44564.48 00:16:39.757 [2024-11-29 14:24:21.228026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:39.757 { 00:16:39.757 "results": [ 00:16:39.757 { 00:16:39.757 "job": "ftl0", 00:16:39.757 "core_mask": "0x1", 00:16:39.757 "workload": "randwrite", 00:16:39.757 "status": "finished", 00:16:39.757 "queue_depth": 128, 00:16:39.757 "io_size": 4096, 00:16:39.757 "runtime": 4.029943, 00:16:39.757 "iops": 5746.483262914637, 00:16:39.757 "mibps": 22.4472002457603, 00:16:39.757 "io_failed": 0, 00:16:39.757 "io_timeout": 0, 00:16:39.757 "avg_latency_us": 22201.73483933115, 00:16:39.757 "min_latency_us": 349.7353846153846, 00:16:39.757 "max_latency_us": 44564.48 00:16:39.757 } 00:16:39.757 ], 00:16:39.757 "core_count": 1 00:16:39.757 } 00:16:39.757 14:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:39.757 [2024-11-29 14:24:21.334793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:39.757 Running I/O for 4 seconds... 
00:16:41.646 4615.00 IOPS, 18.03 MiB/s [2024-11-29T14:24:24.381Z] 4545.50 IOPS, 17.76 MiB/s [2024-11-29T14:24:25.766Z] 4526.67 IOPS, 17.68 MiB/s [2024-11-29T14:24:25.766Z] 4489.00 IOPS, 17.54 MiB/s 00:16:43.972 Latency(us) 00:16:43.972 [2024-11-29T14:24:25.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.972 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:43.972 Verification LBA range: start 0x0 length 0x1400000 00:16:43.972 ftl0 : 4.02 4501.49 17.58 0.00 0.00 28347.47 359.19 50412.31 00:16:43.972 [2024-11-29T14:24:25.766Z] =================================================================================================================== 00:16:43.972 [2024-11-29T14:24:25.766Z] Total : 4501.49 17.58 0.00 0.00 28347.47 0.00 50412.31 00:16:43.972 [2024-11-29 14:24:25.360240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:43.972 { 00:16:43.972 "results": [ 00:16:43.972 { 00:16:43.972 "job": "ftl0", 00:16:43.972 "core_mask": "0x1", 00:16:43.972 "workload": "verify", 00:16:43.972 "status": "finished", 00:16:43.972 "verify_range": { 00:16:43.972 "start": 0, 00:16:43.972 "length": 20971520 00:16:43.972 }, 00:16:43.972 "queue_depth": 128, 00:16:43.972 "io_size": 4096, 00:16:43.972 "runtime": 4.017339, 00:16:43.972 "iops": 4501.4871784532, 00:16:43.972 "mibps": 17.58393429083281, 00:16:43.972 "io_failed": 0, 00:16:43.972 "io_timeout": 0, 00:16:43.972 "avg_latency_us": 28347.47238085515, 00:16:43.972 "min_latency_us": 359.1876923076923, 00:16:43.972 "max_latency_us": 50412.307692307695 00:16:43.972 } 00:16:43.972 ], 00:16:43.972 "core_count": 1 00:16:43.972 } 00:16:43.972 14:24:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:43.972 [2024-11-29 14:24:25.628652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.972 [2024-11-29 14:24:25.628718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.972 [2024-11-29 14:24:25.628749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.972 [2024-11-29 14:24:25.628758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.972 [2024-11-29 14:24:25.628783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.972 [2024-11-29 14:24:25.629555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.973 [2024-11-29 14:24:25.629662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.973 [2024-11-29 14:24:25.629677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:16:43.973 [2024-11-29 14:24:25.629691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.973 [2024-11-29 14:24:25.632851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.973 [2024-11-29 14:24:25.633051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.973 [2024-11-29 14:24:25.633072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:16:43.973 [2024-11-29 14:24:25.633087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.836709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.836861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:44.234 [2024-11-29 14:24:25.836881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 203.601 ms 00:16:44.234 [2024-11-29 14:24:25.836891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.843095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.843128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.234 [2024-11-29 14:24:25.843138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:16:44.234 [2024-11-29 14:24:25.843147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.845427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.845464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.234 [2024-11-29 14:24:25.845474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:16:44.234 [2024-11-29 14:24:25.845485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.850096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.850138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.234 [2024-11-29 14:24:25.850149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:16:44.234 [2024-11-29 14:24:25.850162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.850269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.850281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.234 [2024-11-29 14:24:25.850289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:44.234 [2024-11-29 14:24:25.850298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.234 [2024-11-29 14:24:25.853079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.234 [2024-11-29 14:24:25.853116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.234 [2024-11-29 14:24:25.853125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.767 ms 00:16:44.235 [2024-11-29 14:24:25.853135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.235 [2024-11-29 14:24:25.855354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.235 [2024-11-29 14:24:25.855391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.235 [2024-11-29 14:24:25.855400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:16:44.235 [2024-11-29 14:24:25.855408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.235 [2024-11-29 14:24:25.857238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.235 [2024-11-29 14:24:25.857275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.235 [2024-11-29 14:24:25.857283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:16:44.235 [2024-11-29 14:24:25.857293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.235 [2024-11-29 14:24:25.859065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.235 [2024-11-29 14:24:25.859099] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.235 [2024-11-29 14:24:25.859108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:16:44.235 [2024-11-29 14:24:25.859118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.235 [2024-11-29 14:24:25.859146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.235 [2024-11-29 14:24:25.859161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:44.235 [2024-11-29 14:24:25.859346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.235 [2024-11-29 14:24:25.859887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.859993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860034] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.236 [2024-11-29 14:24:25.860075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.236 [2024-11-29 14:24:25.860083] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 98221dd8-0fef-47f8-9c02-90fd5168af2f 00:16:44.236 [2024-11-29 14:24:25.860092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.236 [2024-11-29 14:24:25.860103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.236 [2024-11-29 14:24:25.860111] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.236 [2024-11-29 14:24:25.860119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.236 [2024-11-29 14:24:25.860129] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.236 [2024-11-29 14:24:25.860136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.236 [2024-11-29 14:24:25.860144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.236 [2024-11-29 14:24:25.860150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.236 [2024-11-29 14:24:25.860158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.236 [2024-11-29 14:24:25.860165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.236 [2024-11-29 14:24:25.860174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.236 [2024-11-29 14:24:25.860182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:16:44.236 [2024-11-29 14:24:25.860193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.861744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.236 [2024-11-29 14:24:25.861769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.236 [2024-11-29 14:24:25.861778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:16:44.236 [2024-11-29 14:24:25.861787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.861879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.236 [2024-11-29 14:24:25.861890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.236 [2024-11-29 14:24:25.861898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:44.236 [2024-11-29 14:24:25.861909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.866702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.866737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.236 [2024-11-29 14:24:25.866746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.866756] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.866806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.866815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.236 [2024-11-29 14:24:25.866823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.866832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.866918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.866931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.236 [2024-11-29 14:24:25.866938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.866947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.866961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.866971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.236 [2024-11-29 14:24:25.866978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.866988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.876166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.876209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.236 [2024-11-29 14:24:25.876219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.876228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.236 [2024-11-29 14:24:25.884200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.236 [2024-11-29 14:24:25.884292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.236 [2024-11-29 14:24:25.884367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.236 [2024-11-29 14:24:25.884461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:44.236 [2024-11-29 14:24:25.884470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.236 [2024-11-29 14:24:25.884537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.236 [2024-11-29 14:24:25.884611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.236 [2024-11-29 14:24:25.884684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.236 [2024-11-29 14:24:25.884692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.236 [2024-11-29 14:24:25.884703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.236 [2024-11-29 14:24:25.884822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 256.140 ms, result 0 00:16:44.236 true 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85078 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85078 ']' 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85078 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85078 00:16:44.236 killing process with pid 85078 00:16:44.236 Received shutdown signal, test time was about 4.000000 seconds 00:16:44.236 00:16:44.236 Latency(us) 00:16:44.236 [2024-11-29T14:24:26.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.236 [2024-11-29T14:24:26.030Z] =================================================================================================================== 00:16:44.236 [2024-11-29T14:24:26.030Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.236 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85078' 00:16:44.237 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85078 00:16:44.237 14:24:25 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85078 00:16:47.540 Remove shared memory files 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:47.540 14:24:29 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:47.540 ************************************ 00:16:47.540 END TEST ftl_bdevperf 00:16:47.540 ************************************ 00:16:47.540 00:16:47.540 real 0m24.106s 00:16:47.540 user 0m26.544s 00:16:47.540 sys 0m1.007s 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.540 14:24:29 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:47.540 14:24:29 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.540 14:24:29 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:47.540 14:24:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.540 14:24:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:47.540 ************************************ 00:16:47.540 START TEST ftl_trim 00:16:47.540 ************************************ 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.540 * Looking for test storage... 00:16:47.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.540 14:24:29 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.540 --rc genhtml_branch_coverage=1 00:16:47.540 --rc genhtml_function_coverage=1 00:16:47.540 --rc genhtml_legend=1 00:16:47.540 --rc geninfo_all_blocks=1 00:16:47.540 --rc geninfo_unexecuted_blocks=1 00:16:47.540 00:16:47.540 ' 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.540 --rc genhtml_branch_coverage=1 00:16:47.540 --rc genhtml_function_coverage=1 00:16:47.540 --rc genhtml_legend=1 00:16:47.540 --rc geninfo_all_blocks=1 00:16:47.540 --rc geninfo_unexecuted_blocks=1 00:16:47.540 00:16:47.540 ' 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.540 --rc genhtml_branch_coverage=1 00:16:47.540 --rc genhtml_function_coverage=1 00:16:47.540 --rc genhtml_legend=1 00:16:47.540 --rc geninfo_all_blocks=1 00:16:47.540 --rc geninfo_unexecuted_blocks=1 00:16:47.540 00:16:47.540 ' 00:16:47.540 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:47.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.540 --rc genhtml_branch_coverage=1 00:16:47.540 --rc genhtml_function_coverage=1 00:16:47.540 --rc genhtml_legend=1 00:16:47.540 --rc geninfo_all_blocks=1 00:16:47.540 --rc geninfo_unexecuted_blocks=1 00:16:47.540 00:16:47.540 ' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:47.540 14:24:29 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.541 14:24:29 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:47.541 14:24:29 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85424 00:16:47.802 14:24:29 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85424 00:16:47.802 14:24:29 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85424 ']' 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.802 14:24:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:47.802 [2024-11-29 14:24:29.417909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:47.802 [2024-11-29 14:24:29.418317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85424 ] 00:16:47.802 [2024-11-29 14:24:29.569741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.062 [2024-11-29 14:24:29.620551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.062 [2024-11-29 14:24:29.620690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.062 [2024-11-29 14:24:29.620778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.633 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.633 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:48.633 14:24:30 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:48.893 14:24:30 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:48.893 14:24:30 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:48.893 14:24:30 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:48.893 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:48.893 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:48.893 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:48.893 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:48.893 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:49.158 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:49.158 { 00:16:49.158 "name": "nvme0n1", 00:16:49.158 "aliases": [ 
00:16:49.158 "63dd4ef4-a0c5-4b4f-9252-362d37005fbe" 00:16:49.158 ], 00:16:49.158 "product_name": "NVMe disk", 00:16:49.158 "block_size": 4096, 00:16:49.158 "num_blocks": 1310720, 00:16:49.158 "uuid": "63dd4ef4-a0c5-4b4f-9252-362d37005fbe", 00:16:49.158 "numa_id": -1, 00:16:49.158 "assigned_rate_limits": { 00:16:49.158 "rw_ios_per_sec": 0, 00:16:49.158 "rw_mbytes_per_sec": 0, 00:16:49.158 "r_mbytes_per_sec": 0, 00:16:49.158 "w_mbytes_per_sec": 0 00:16:49.158 }, 00:16:49.158 "claimed": true, 00:16:49.158 "claim_type": "read_many_write_one", 00:16:49.158 "zoned": false, 00:16:49.158 "supported_io_types": { 00:16:49.158 "read": true, 00:16:49.158 "write": true, 00:16:49.158 "unmap": true, 00:16:49.158 "flush": true, 00:16:49.158 "reset": true, 00:16:49.158 "nvme_admin": true, 00:16:49.158 "nvme_io": true, 00:16:49.158 "nvme_io_md": false, 00:16:49.158 "write_zeroes": true, 00:16:49.158 "zcopy": false, 00:16:49.158 "get_zone_info": false, 00:16:49.158 "zone_management": false, 00:16:49.158 "zone_append": false, 00:16:49.158 "compare": true, 00:16:49.158 "compare_and_write": false, 00:16:49.158 "abort": true, 00:16:49.158 "seek_hole": false, 00:16:49.158 "seek_data": false, 00:16:49.158 "copy": true, 00:16:49.158 "nvme_iov_md": false 00:16:49.158 }, 00:16:49.158 "driver_specific": { 00:16:49.158 "nvme": [ 00:16:49.158 { 00:16:49.158 "pci_address": "0000:00:11.0", 00:16:49.158 "trid": { 00:16:49.158 "trtype": "PCIe", 00:16:49.158 "traddr": "0000:00:11.0" 00:16:49.158 }, 00:16:49.158 "ctrlr_data": { 00:16:49.158 "cntlid": 0, 00:16:49.158 "vendor_id": "0x1b36", 00:16:49.158 "model_number": "QEMU NVMe Ctrl", 00:16:49.159 "serial_number": "12341", 00:16:49.159 "firmware_revision": "8.0.0", 00:16:49.159 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:49.159 "oacs": { 00:16:49.159 "security": 0, 00:16:49.159 "format": 1, 00:16:49.159 "firmware": 0, 00:16:49.159 "ns_manage": 1 00:16:49.159 }, 00:16:49.159 "multi_ctrlr": false, 00:16:49.159 "ana_reporting": false 00:16:49.159 }, 00:16:49.159 "vs": { 00:16:49.159 "nvme_version": "1.4" 00:16:49.159 }, 00:16:49.159 "ns_data": { 00:16:49.159 "id": 1, 00:16:49.159 "can_share": false 00:16:49.159 } 00:16:49.159 } 00:16:49.159 ], 00:16:49.159 "mp_policy": "active_passive" 00:16:49.159 } 00:16:49.159 } 00:16:49.159 ]' 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:49.159 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:49.159 14:24:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:49.159 14:24:30 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:49.159 14:24:30 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:49.159 14:24:30 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:49.159 14:24:30 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:49.477 14:24:31 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=fd516d03-7d60-4e9a-9c5e-617936f8faf0 00:16:49.477 14:24:31 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:49.477 14:24:31 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u fd516d03-7d60-4e9a-9c5e-617936f8faf0 00:16:49.738 14:24:31 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=1edfd12d-7287-4c5e-b0f3-d91b7981eda7 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1edfd12d-7287-4c5e-b0f3-d91b7981eda7 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:49.999 14:24:31 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:49.999 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:49.999 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:49.999 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:49.999 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:50.259 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:50.259 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:50.259 { 00:16:50.259 "name": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:50.259 "aliases": [ 00:16:50.260 "lvs/nvme0n1p0" 00:16:50.260 ], 00:16:50.260 "product_name": "Logical Volume", 00:16:50.260 "block_size": 4096, 00:16:50.260 "num_blocks": 26476544, 00:16:50.260 "uuid": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:50.260 "assigned_rate_limits": { 00:16:50.260 "rw_ios_per_sec": 0, 00:16:50.260 "rw_mbytes_per_sec": 0, 00:16:50.260 "r_mbytes_per_sec": 0, 00:16:50.260 "w_mbytes_per_sec": 0 00:16:50.260 }, 00:16:50.260 "claimed": false, 00:16:50.260 "zoned": false, 00:16:50.260 "supported_io_types": { 00:16:50.260 "read": true, 00:16:50.260 "write": true, 00:16:50.260 "unmap": true, 00:16:50.260 "flush": false, 00:16:50.260 "reset": true, 00:16:50.260 "nvme_admin": false, 00:16:50.260 "nvme_io": false, 00:16:50.260 "nvme_io_md": false, 00:16:50.260 "write_zeroes": true, 00:16:50.260 "zcopy": false, 00:16:50.260 "get_zone_info": false, 00:16:50.260 "zone_management": false, 00:16:50.260 "zone_append": false, 00:16:50.260 "compare": false, 00:16:50.260 "compare_and_write": false, 00:16:50.260 "abort": false, 00:16:50.260 "seek_hole": true, 00:16:50.260 "seek_data": true, 00:16:50.260 "copy": false, 00:16:50.260 "nvme_iov_md": false 00:16:50.260 }, 00:16:50.260 "driver_specific": { 00:16:50.260 "lvol": { 00:16:50.260 "lvol_store_uuid": "1edfd12d-7287-4c5e-b0f3-d91b7981eda7", 00:16:50.260 "base_bdev": "nvme0n1", 00:16:50.260 "thin_provision": true, 00:16:50.260 "num_allocated_clusters": 0, 00:16:50.260 "snapshot": false, 00:16:50.260 "clone": false, 00:16:50.260 "esnap_clone": false 00:16:50.260 } 00:16:50.260 } 00:16:50.260 } 00:16:50.260 ]' 00:16:50.260 14:24:32 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:50.260 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:50.260 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:50.520 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:50.520 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:50.520 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:50.520 14:24:32 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:50.520 14:24:32 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:50.520 14:24:32 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:50.782 14:24:32 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:50.782 14:24:32 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:50.782 14:24:32 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:50.782 { 00:16:50.782 "name": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:50.782 "aliases": [ 00:16:50.782 "lvs/nvme0n1p0" 00:16:50.782 ], 00:16:50.782 "product_name": "Logical Volume", 00:16:50.782 "block_size": 4096, 00:16:50.782 "num_blocks": 26476544, 00:16:50.782 "uuid": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:50.782 "assigned_rate_limits": { 00:16:50.782 "rw_ios_per_sec": 0, 00:16:50.782 "rw_mbytes_per_sec": 0, 00:16:50.782 "r_mbytes_per_sec": 0, 00:16:50.782 "w_mbytes_per_sec": 0 00:16:50.782 }, 00:16:50.782 "claimed": false, 00:16:50.782 "zoned": false, 00:16:50.782 "supported_io_types": { 00:16:50.782 "read": true, 00:16:50.782 "write": true, 00:16:50.782 "unmap": true, 00:16:50.782 "flush": false, 00:16:50.782 "reset": true, 00:16:50.782 "nvme_admin": false, 00:16:50.782 "nvme_io": false, 00:16:50.782 "nvme_io_md": false, 00:16:50.782 "write_zeroes": true, 00:16:50.782 "zcopy": false, 00:16:50.782 "get_zone_info": false, 00:16:50.782 "zone_management": false, 00:16:50.782 "zone_append": false, 00:16:50.782 "compare": false, 00:16:50.782 "compare_and_write": false, 00:16:50.782 "abort": false, 00:16:50.782 "seek_hole": true, 00:16:50.782 "seek_data": true, 00:16:50.782 "copy": false, 00:16:50.782 "nvme_iov_md": false 00:16:50.782 }, 00:16:50.782 "driver_specific": { 00:16:50.782 "lvol": { 00:16:50.782 "lvol_store_uuid": "1edfd12d-7287-4c5e-b0f3-d91b7981eda7", 00:16:50.782 "base_bdev": "nvme0n1", 00:16:50.782 "thin_provision": true, 00:16:50.782 "num_allocated_clusters": 0, 00:16:50.782 "snapshot": false, 00:16:50.782 "clone": false, 00:16:50.782 "esnap_clone": false 00:16:50.782 } 00:16:50.782 } 00:16:50.782 } 00:16:50.782 ]' 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:50.782 14:24:32 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:50.782 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:51.043 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:51.043 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:51.043 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:51.043 14:24:32 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:51.043 14:24:32 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:51.043 14:24:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:51.043 14:24:32 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:51.043 14:24:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:51.044 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:51.044 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:51.044 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:51.044 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:51.044 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3539bfd4-0558-4c1c-aeed-e1ad82095de3 00:16:51.304 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:51.304 { 00:16:51.304 "name": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:51.304 "aliases": [ 00:16:51.304 "lvs/nvme0n1p0" 00:16:51.304 ], 00:16:51.304 "product_name": "Logical Volume", 00:16:51.304 "block_size": 4096, 00:16:51.304 "num_blocks": 26476544, 00:16:51.304 "uuid": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:51.304 "assigned_rate_limits": { 00:16:51.304 "rw_ios_per_sec": 0, 00:16:51.304 "rw_mbytes_per_sec": 0, 00:16:51.304 "r_mbytes_per_sec": 0, 00:16:51.304 "w_mbytes_per_sec": 0 00:16:51.304 }, 00:16:51.304 "claimed": false, 00:16:51.304 "zoned": false, 00:16:51.304 "supported_io_types": { 00:16:51.304 "read": true, 00:16:51.304 "write": true, 00:16:51.304 "unmap": true, 00:16:51.304 "flush": false, 00:16:51.304 "reset": true, 00:16:51.304 "nvme_admin": false, 00:16:51.304 "nvme_io": false, 00:16:51.304 "nvme_io_md": false, 00:16:51.304 "write_zeroes": true, 00:16:51.304 "zcopy": false, 00:16:51.304 "get_zone_info": false, 00:16:51.304 "zone_management": false, 00:16:51.304 "zone_append": false, 00:16:51.304 "compare": false, 00:16:51.304 "compare_and_write": false, 00:16:51.304 "abort": false, 00:16:51.304 "seek_hole": true, 00:16:51.304 "seek_data": true, 00:16:51.304 "copy": false, 00:16:51.304 "nvme_iov_md": false 00:16:51.304 }, 00:16:51.304 "driver_specific": { 00:16:51.304 "lvol": { 00:16:51.304 "lvol_store_uuid": "1edfd12d-7287-4c5e-b0f3-d91b7981eda7", 00:16:51.304 "base_bdev": "nvme0n1", 00:16:51.304 "thin_provision": true, 00:16:51.304 "num_allocated_clusters": 0, 00:16:51.304 "snapshot": false, 00:16:51.304 "clone": false, 00:16:51.304 "esnap_clone": false 00:16:51.304 } 00:16:51.304 } 00:16:51.304 } 00:16:51.304 ]' 00:16:51.304 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:51.304 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:51.304 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:51.304 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:51.304 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:51.304 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:51.304 14:24:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:51.304 14:24:33 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3539bfd4-0558-4c1c-aeed-e1ad82095de3 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:51.567 [2024-11-29 14:24:33.246465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.246516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.567 [2024-11-29 14:24:33.246528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.567 [2024-11-29 14:24:33.246553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.248430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.248460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.567 [2024-11-29 14:24:33.248468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:16:51.567 [2024-11-29 14:24:33.248476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.248552] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.567 [2024-11-29 14:24:33.248803] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.567 [2024-11-29 14:24:33.248819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.248835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.567 [2024-11-29 14:24:33.248842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:16:51.567 [2024-11-29 14:24:33.248858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.248939] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID df3abf56-433e-4b26-bd7a-fcb295efe551 00:16:51.567 [2024-11-29 14:24:33.249980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.250082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:51.567 [2024-11-29 14:24:33.250097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:51.567 [2024-11-29 14:24:33.250103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.255435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.255459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.567 [2024-11-29 14:24:33.255468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:16:51.567 [2024-11-29 14:24:33.255474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.255593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.255603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.567 [2024-11-29 14:24:33.255611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.047 ms 00:16:51.567 [2024-11-29 14:24:33.255627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.255663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.255671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.567 [2024-11-29 14:24:33.255680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:51.567 [2024-11-29 14:24:33.255685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.255715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:51.567 [2024-11-29 14:24:33.257026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.257130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.567 [2024-11-29 14:24:33.257142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:16:51.567 [2024-11-29 14:24:33.257149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.257189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.257199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.567 [2024-11-29 14:24:33.257205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:51.567 [2024-11-29 14:24:33.257213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.257243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:51.567 [2024-11-29 14:24:33.257357] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:51.567 [2024-11-29 14:24:33.257368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:51.567 [2024-11-29 14:24:33.257378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:51.567 [2024-11-29 14:24:33.257385] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.567 [2024-11-29 14:24:33.257393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.567 [2024-11-29 14:24:33.257399] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:51.567 [2024-11-29 14:24:33.257416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.567 [2024-11-29 14:24:33.257422] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:51.567 [2024-11-29 14:24:33.257428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:51.567 [2024-11-29 14:24:33.257442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 [2024-11-29 14:24:33.257449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.567 [2024-11-29 14:24:33.257456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:16:51.567 [2024-11-29 14:24:33.257464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.257560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.567 
[2024-11-29 14:24:33.257571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.567 [2024-11-29 14:24:33.257577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:51.567 [2024-11-29 14:24:33.257584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.567 [2024-11-29 14:24:33.257694] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.567 [2024-11-29 14:24:33.257711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.567 [2024-11-29 14:24:33.257718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.567 [2024-11-29 14:24:33.257727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.567 [2024-11-29 14:24:33.257735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.567 [2024-11-29 14:24:33.257742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.567 [2024-11-29 14:24:33.257748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:51.567 [2024-11-29 14:24:33.257756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.567 [2024-11-29 14:24:33.257762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:51.567 [2024-11-29 14:24:33.257771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.567 [2024-11-29 14:24:33.257777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.567 [2024-11-29 14:24:33.257785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:51.567 [2024-11-29 14:24:33.257791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.567 [2024-11-29 14:24:33.257800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.568 [2024-11-29 14:24:33.257806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:51.568 [2024-11-29 14:24:33.257814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:51.568 [2024-11-29 14:24:33.257828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:51.568 [2024-11-29 14:24:33.257833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.568 [2024-11-29 14:24:33.257848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.568 [2024-11-29 14:24:33.257861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.568 [2024-11-29 14:24:33.257868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.568 [2024-11-29 14:24:33.257881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.568 [2024-11-29 14:24:33.257887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.568 [2024-11-29 14:24:33.257900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:51.568 [2024-11-29 14:24:33.257909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.568 [2024-11-29 14:24:33.257922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.568 [2024-11-29 14:24:33.257928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.568 [2024-11-29 14:24:33.257943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.568 [2024-11-29 14:24:33.257952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:51.568 [2024-11-29 14:24:33.257957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.568 [2024-11-29 14:24:33.257964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:51.568 [2024-11-29 14:24:33.257969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:51.568 [2024-11-29 14:24:33.257976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.568 [2024-11-29 14:24:33.257982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:51.568 [2024-11-29 14:24:33.257990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:51.568 [2024-11-29 14:24:33.257995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.568 [2024-11-29 14:24:33.258002] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.568 [2024-11-29 14:24:33.258008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.568 [2024-11-29 14:24:33.258017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.568 [2024-11-29 14:24:33.258024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.568 [2024-11-29 14:24:33.258033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.568 [2024-11-29 14:24:33.258039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.568 [2024-11-29 14:24:33.258046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:51.568 [2024-11-29 14:24:33.258051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.568 [2024-11-29 14:24:33.258059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.568 [2024-11-29 14:24:33.258065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.568 [2024-11-29 14:24:33.258082] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.568 [2024-11-29 14:24:33.258090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:51.568 [2024-11-29 14:24:33.258105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:51.568 [2024-11-29 14:24:33.258111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:51.568 [2024-11-29 14:24:33.258117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:51.568 [2024-11-29 14:24:33.258123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:51.568 [2024-11-29 14:24:33.258128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:51.568 [2024-11-29 14:24:33.258137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:51.568 [2024-11-29 14:24:33.258142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:51.568 [2024-11-29 14:24:33.258150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:51.568 [2024-11-29 14:24:33.258155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:51.568 [2024-11-29 14:24:33.258184] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.568 [2024-11-29 14:24:33.258191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.568 [2024-11-29 14:24:33.258211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.568 [2024-11-29 14:24:33.258218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.568 [2024-11-29 14:24:33.258223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.568 [2024-11-29 14:24:33.258231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.568 [2024-11-29 14:24:33.258244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.568 [2024-11-29 14:24:33.258254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:16:51.568 [2024-11-29 14:24:33.258259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.568 [2024-11-29 14:24:33.258338] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:51.568 [2024-11-29 14:24:33.258346] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:54.857 [2024-11-29 14:24:35.941071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.941275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:54.857 [2024-11-29 14:24:35.941359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2682.719 ms 00:16:54.857 [2024-11-29 14:24:35.941384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.960581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.960649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:54.857 [2024-11-29 14:24:35.960676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.066 ms 00:16:54.857 [2024-11-29 14:24:35.960693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.960972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.960997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:54.857 [2024-11-29 14:24:35.961018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:16:54.857 [2024-11-29 14:24:35.961033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.972962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.972994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.857 [2024-11-29 14:24:35.973005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.850 ms 00:16:54.857 [2024-11-29 14:24:35.973013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.973088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.973097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.857 [2024-11-29 14:24:35.973107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.857 [2024-11-29 14:24:35.973115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.973431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.973445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.857 [2024-11-29 14:24:35.973456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:16:54.857 [2024-11-29 14:24:35.973463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.973625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.973645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.857 [2024-11-29 14:24:35.973656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:54.857 [2024-11-29 14:24:35.973665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.979176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:35.979204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:54.857 [2024-11-29 14:24:35.979215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.475 ms 00:16:54.857 [2024-11-29 14:24:35.979222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:35.987479] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:54.857 [2024-11-29 14:24:36.001897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.001932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:54.857 [2024-11-29 14:24:36.001942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.592 ms 00:16:54.857 [2024-11-29 14:24:36.001951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.070772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.070818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:54.857 [2024-11-29 14:24:36.070832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.741 ms 00:16:54.857 [2024-11-29 14:24:36.070844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.071059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.071074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:54.857 [2024-11-29 14:24:36.071085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:54.857 [2024-11-29 14:24:36.071094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.074524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.074559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:54.857 [2024-11-29 14:24:36.074569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:16:54.857 [2024-11-29 14:24:36.074589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.077306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.077447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:54.857 [2024-11-29 14:24:36.077464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:16:54.857 [2024-11-29 14:24:36.077474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.077807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.077821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:54.857 [2024-11-29 14:24:36.077832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:54.857 [2024-11-29 14:24:36.077843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.111657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.111812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:54.857 [2024-11-29 14:24:36.111830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.783 ms 00:16:54.857 [2024-11-29 14:24:36.111841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
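For reference, the bring-up that the trace above walks through can be reproduced by hand against a running spdk_tgt (the trace starts it with -m 0x7). The sketch below is illustrative only, not part of the captured log: it assumes the same QEMU NVMe devices at 0000:00:11.0 (base) and 0000:00:10.0 (cache), the default /var/tmp/spdk.sock RPC socket, and the sizes shown in the trace; the UUIDs are whatever the create calls return, captured the same way the traced scripts do.

#!/usr/bin/env bash
# Minimal sketch of the FTL bdev bring-up traced above (values are illustrative).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the base (data) and cache NVMe controllers.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

# Carve a thin-provisioned 103424 MiB logical volume out of the base device;
# capture the returned UUIDs, as the traced common.sh does.
lvs_uuid=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)
lvol_uuid=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# Split a 5171 MiB partition off the cache controller to act as the NV cache.
"$rpc" bdev_split_create nvc0n1 -s 5171 1

# Create the FTL bdev on top of the lvol, backed by the nvc0n1p0 write buffer.
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

As the startup messages above note, nvc0n1p0 is used as the FTL write buffer cache, and the 60 MiB L2P DRAM limit is what produces the "l2p maximum resident size is: 59 (of 60) MiB" line later in the trace.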
00:16:54.857 [2024-11-29 14:24:36.116543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.116580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:54.857 [2024-11-29 14:24:36.116590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:16:54.857 [2024-11-29 14:24:36.116602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.857 [2024-11-29 14:24:36.119892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.857 [2024-11-29 14:24:36.119924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:54.857 [2024-11-29 14:24:36.119933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:16:54.858 [2024-11-29 14:24:36.119942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.858 [2024-11-29 14:24:36.124228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.858 [2024-11-29 14:24:36.124261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.858 [2024-11-29 14:24:36.124270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:16:54.858 [2024-11-29 14:24:36.124280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.858 [2024-11-29 14:24:36.124354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.858 [2024-11-29 14:24:36.124367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.858 [2024-11-29 14:24:36.124387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:54.858 [2024-11-29 14:24:36.124398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.858 [2024-11-29 14:24:36.124484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.858 [2024-11-29 14:24:36.124514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.858 [2024-11-29 14:24:36.124532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:54.858 [2024-11-29 14:24:36.124541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.858 [2024-11-29 14:24:36.125424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.858 [2024-11-29 14:24:36.126399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2878.661 ms, result 0 00:16:54.858 [2024-11-29 14:24:36.127278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.858 { 00:16:54.858 "name": "ftl0", 00:16:54.858 "uuid": "df3abf56-433e-4b26-bd7a-fcb295efe551" 00:16:54.858 } 00:16:54.858 14:24:36 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.858 14:24:36 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:54.858 [ 00:16:54.858 { 00:16:54.858 "name": "ftl0", 00:16:54.858 "aliases": [ 00:16:54.858 "df3abf56-433e-4b26-bd7a-fcb295efe551" 00:16:54.858 ], 00:16:54.858 "product_name": "FTL disk", 00:16:54.858 "block_size": 4096, 00:16:54.858 "num_blocks": 23592960, 00:16:54.858 "uuid": "df3abf56-433e-4b26-bd7a-fcb295efe551", 00:16:54.858 "assigned_rate_limits": { 00:16:54.858 "rw_ios_per_sec": 0, 00:16:54.858 "rw_mbytes_per_sec": 0, 00:16:54.858 "r_mbytes_per_sec": 0, 00:16:54.858 "w_mbytes_per_sec": 0 00:16:54.858 }, 00:16:54.858 "claimed": false, 00:16:54.858 "zoned": false, 00:16:54.858 "supported_io_types": { 00:16:54.858 "read": true, 00:16:54.858 "write": true, 00:16:54.858 "unmap": true, 00:16:54.858 "flush": true, 00:16:54.858 "reset": false, 00:16:54.858 "nvme_admin": false, 00:16:54.858 "nvme_io": false, 00:16:54.858 "nvme_io_md": false, 00:16:54.858 "write_zeroes": true, 00:16:54.858 "zcopy": false, 00:16:54.858 "get_zone_info": false, 00:16:54.858 "zone_management": false, 00:16:54.858 "zone_append": false, 00:16:54.858 "compare": false, 00:16:54.858 "compare_and_write": false, 00:16:54.858 "abort": false, 00:16:54.858 "seek_hole": false, 00:16:54.858 "seek_data": false, 00:16:54.858 "copy": false, 00:16:54.858 "nvme_iov_md": false 00:16:54.858 }, 00:16:54.858 "driver_specific": { 00:16:54.858 "ftl": { 00:16:54.858 "base_bdev": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 00:16:54.858 "cache": "nvc0n1p0" 00:16:54.858 } 00:16:54.858 } 00:16:54.858 } 00:16:54.858 ] 00:16:54.858 14:24:36 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:54.858 14:24:36 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:54.858 14:24:36 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:55.119 14:24:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:55.119 14:24:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:55.379 14:24:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:55.379 { 00:16:55.379 "name": "ftl0", 00:16:55.379 "aliases": [ 00:16:55.379 "df3abf56-433e-4b26-bd7a-fcb295efe551" 00:16:55.379 ], 00:16:55.379 "product_name": "FTL disk", 00:16:55.379 "block_size": 4096, 00:16:55.379 "num_blocks": 23592960, 00:16:55.379 "uuid": "df3abf56-433e-4b26-bd7a-fcb295efe551", 00:16:55.379 "assigned_rate_limits": { 00:16:55.379 "rw_ios_per_sec": 0, 00:16:55.379 "rw_mbytes_per_sec": 0, 00:16:55.379 "r_mbytes_per_sec": 0, 00:16:55.379 "w_mbytes_per_sec": 0 00:16:55.379 }, 00:16:55.379 "claimed": false, 00:16:55.379 "zoned": false, 00:16:55.379 "supported_io_types": { 00:16:55.379 "read": true, 00:16:55.379 "write": true, 00:16:55.379 "unmap": true, 00:16:55.379 "flush": true, 00:16:55.379 "reset": false, 00:16:55.379 "nvme_admin": false, 00:16:55.379 "nvme_io": false, 00:16:55.379 "nvme_io_md": false, 00:16:55.379 "write_zeroes": true, 00:16:55.379 "zcopy": false, 00:16:55.379 "get_zone_info": false, 00:16:55.379 "zone_management": false, 00:16:55.379 "zone_append": false, 00:16:55.379 "compare": false, 00:16:55.379 "compare_and_write": false, 00:16:55.379 "abort": false, 00:16:55.379 "seek_hole": false, 00:16:55.379 "seek_data": false, 00:16:55.379 "copy": false, 00:16:55.379 "nvme_iov_md": false 00:16:55.379 }, 00:16:55.379 "driver_specific": { 00:16:55.379 "ftl": { 00:16:55.379 "base_bdev": "3539bfd4-0558-4c1c-aeed-e1ad82095de3", 
00:16:55.379 "cache": "nvc0n1p0" 00:16:55.379 } 00:16:55.379 } 00:16:55.379 } 00:16:55.379 ]' 00:16:55.379 14:24:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:55.379 14:24:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:55.379 14:24:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:55.379 [2024-11-29 14:24:37.159750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.379 [2024-11-29 14:24:37.159783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.379 [2024-11-29 14:24:37.159794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:55.379 [2024-11-29 14:24:37.159807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.379 [2024-11-29 14:24:37.159843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:55.379 [2024-11-29 14:24:37.160253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.379 [2024-11-29 14:24:37.160271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.379 [2024-11-29 14:24:37.160278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:16:55.380 [2024-11-29 14:24:37.160285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.380 [2024-11-29 14:24:37.160838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.380 [2024-11-29 14:24:37.160849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:55.380 [2024-11-29 14:24:37.160857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:16:55.380 [2024-11-29 14:24:37.160867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.380 [2024-11-29 14:24:37.163578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.380 [2024-11-29 14:24:37.163596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:55.380 [2024-11-29 14:24:37.163603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:16:55.380 [2024-11-29 14:24:37.163611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.380 [2024-11-29 14:24:37.168798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.380 [2024-11-29 14:24:37.168824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:55.380 [2024-11-29 14:24:37.168832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:16:55.380 [2024-11-29 14:24:37.168842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.380 [2024-11-29 14:24:37.170650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.380 [2024-11-29 14:24:37.170679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.380 [2024-11-29 14:24:37.170686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:16:55.380 [2024-11-29 14:24:37.170693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.174946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.174978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.642 [2024-11-29 14:24:37.174986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.215 ms 00:16:55.642 [2024-11-29 14:24:37.174994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.175166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.175193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.642 [2024-11-29 14:24:37.175202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:16:55.642 [2024-11-29 14:24:37.175209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.177029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.177060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:55.642 [2024-11-29 14:24:37.177068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:16:55.642 [2024-11-29 14:24:37.177079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.178441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.178568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:55.642 [2024-11-29 14:24:37.178580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:16:55.642 [2024-11-29 14:24:37.178587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.180013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.180053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:55.642 [2024-11-29 14:24:37.180062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:16:55.642 [2024-11-29 14:24:37.180070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.181166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.642 [2024-11-29 14:24:37.181196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:55.642 [2024-11-29 14:24:37.181203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:16:55.642 [2024-11-29 14:24:37.181209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.642 [2024-11-29 14:24:37.181262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:55.642 [2024-11-29 14:24:37.181276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181329] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 
14:24:37.181529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:55.642 [2024-11-29 14:24:37.181689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:55.643 [2024-11-29 14:24:37.181702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:55.643 [2024-11-29 14:24:37.181988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:55.643 [2024-11-29 14:24:37.181995] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:16:55.643 [2024-11-29 14:24:37.182003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:55.643 [2024-11-29 14:24:37.182008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:55.643 [2024-11-29 14:24:37.182015] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:55.643 [2024-11-29 14:24:37.182020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:55.643 [2024-11-29 14:24:37.182027] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:55.643 [2024-11-29 14:24:37.182036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:55.643 
[2024-11-29 14:24:37.182044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:55.643 [2024-11-29 14:24:37.182048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:55.643 [2024-11-29 14:24:37.182054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:55.643 [2024-11-29 14:24:37.182060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-11-29 14:24:37.182076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:55.643 [2024-11-29 14:24:37.182083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:16:55.643 [2024-11-29 14:24:37.182091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.183540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-11-29 14:24:37.183569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:55.643 [2024-11-29 14:24:37.183575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:16:55.643 [2024-11-29 14:24:37.183585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.183666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-11-29 14:24:37.183675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:55.643 [2024-11-29 14:24:37.183681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:55.643 [2024-11-29 14:24:37.183688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.188550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.188580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.643 [2024-11-29 14:24:37.188587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.188596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.188668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.188677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.643 [2024-11-29 14:24:37.188683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.188692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.188746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.188756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.643 [2024-11-29 14:24:37.188762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.188770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.188797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.188805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.643 [2024-11-29 14:24:37.188811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.188817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.197655] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.197687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.643 [2024-11-29 14:24:37.197696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.197705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.204811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.204843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.643 [2024-11-29 14:24:37.204851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.204861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.204906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.204915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.643 [2024-11-29 14:24:37.204922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.204929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.204978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.204987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.643 [2024-11-29 14:24:37.204994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.643 [2024-11-29 14:24:37.205001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-11-29 14:24:37.205069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.643 [2024-11-29 14:24:37.205089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.644 [2024-11-29 14:24:37.205095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.644 [2024-11-29 14:24:37.205102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.644 [2024-11-29 14:24:37.205152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.644 [2024-11-29 14:24:37.205163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.644 [2024-11-29 14:24:37.205170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.644 [2024-11-29 14:24:37.205179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.644 [2024-11-29 14:24:37.205224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.644 [2024-11-29 14:24:37.205233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.644 [2024-11-29 14:24:37.205241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.644 [2024-11-29 14:24:37.205248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.644 [2024-11-29 14:24:37.205298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.644 [2024-11-29 14:24:37.205317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.644 [2024-11-29 14:24:37.205324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.644 [2024-11-29 14:24:37.205331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.644 
[2024-11-29 14:24:37.205482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.727 ms, result 0 00:16:55.644 true 00:16:55.644 14:24:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85424 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85424 ']' 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85424 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85424 00:16:55.644 killing process with pid 85424 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85424' 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85424 00:16:55.644 14:24:37 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85424 00:17:00.930 14:24:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:01.190 65536+0 records in 00:17:01.190 65536+0 records out 00:17:01.190 268435456 bytes (268 MB, 256 MiB) copied, 0.802364 s, 335 MB/s 00:17:01.190 14:24:42 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.190 [2024-11-29 14:24:42.940338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:01.191 [2024-11-29 14:24:42.940582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85584 ] 00:17:01.450 [2024-11-29 14:24:43.088464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.450 [2024-11-29 14:24:43.118996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.450 [2024-11-29 14:24:43.199891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.450 [2024-11-29 14:24:43.199943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.710 [2024-11-29 14:24:43.348846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.710 [2024-11-29 14:24:43.349005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.710 [2024-11-29 14:24:43.349021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.710 [2024-11-29 14:24:43.349034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.350779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.710 [2024-11-29 14:24:43.350806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.710 [2024-11-29 14:24:43.350818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:17:01.710 [2024-11-29 14:24:43.350824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.350881] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.710 [2024-11-29 14:24:43.351152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.710 [2024-11-29 14:24:43.351171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.710 [2024-11-29 14:24:43.351179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.710 [2024-11-29 14:24:43.351187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:01.710 [2024-11-29 14:24:43.351195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.352131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.710 [2024-11-29 14:24:43.354248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.710 [2024-11-29 14:24:43.354281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.710 [2024-11-29 14:24:43.354291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:17:01.710 [2024-11-29 14:24:43.354297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.354343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.710 [2024-11-29 14:24:43.354351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.710 [2024-11-29 14:24:43.354358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:01.710 [2024-11-29 14:24:43.354363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.358653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:01.710 [2024-11-29 14:24:43.358760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.710 [2024-11-29 14:24:43.358776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:17:01.710 [2024-11-29 14:24:43.358782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.710 [2024-11-29 14:24:43.358866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.358876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.711 [2024-11-29 14:24:43.358884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:01.711 [2024-11-29 14:24:43.358893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.358928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.358937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.711 [2024-11-29 14:24:43.358944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:01.711 [2024-11-29 14:24:43.358950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.358968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.711 [2024-11-29 14:24:43.360098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.360125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.711 [2024-11-29 14:24:43.360132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:17:01.711 [2024-11-29 14:24:43.360138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.360165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.360174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.711 [2024-11-29 14:24:43.360180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:01.711 [2024-11-29 14:24:43.360190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.360203] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.711 [2024-11-29 14:24:43.360216] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:01.711 [2024-11-29 14:24:43.360241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.711 [2024-11-29 14:24:43.360255] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:01.711 [2024-11-29 14:24:43.360334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.711 [2024-11-29 14:24:43.360342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.711 [2024-11-29 14:24:43.360353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.711 [2024-11-29 14:24:43.360363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360370] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360378] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.711 [2024-11-29 14:24:43.360384] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.711 [2024-11-29 14:24:43.360389] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.711 [2024-11-29 14:24:43.360394] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.711 [2024-11-29 14:24:43.360400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.360405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.711 [2024-11-29 14:24:43.360414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:01.711 [2024-11-29 14:24:43.360419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.360487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.360511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.711 [2024-11-29 14:24:43.360520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:01.711 [2024-11-29 14:24:43.360526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.360604] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.711 [2024-11-29 14:24:43.360615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.711 [2024-11-29 14:24:43.360621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.711 [2024-11-29 14:24:43.360641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.711 [2024-11-29 14:24:43.360660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.711 [2024-11-29 14:24:43.360672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.711 [2024-11-29 14:24:43.360677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.711 [2024-11-29 14:24:43.360688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.711 [2024-11-29 14:24:43.360694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.711 [2024-11-29 14:24:43.360699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.711 [2024-11-29 14:24:43.360705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.711 [2024-11-29 14:24:43.360715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360719] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.711 [2024-11-29 14:24:43.360729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.711 [2024-11-29 14:24:43.360744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.711 [2024-11-29 14:24:43.360764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.711 [2024-11-29 14:24:43.360781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.711 [2024-11-29 14:24:43.360798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.711 [2024-11-29 14:24:43.360809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.711 [2024-11-29 14:24:43.360814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.711 [2024-11-29 14:24:43.360820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.711 [2024-11-29 14:24:43.360826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.711 [2024-11-29 14:24:43.360832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.711 [2024-11-29 14:24:43.360840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.711 [2024-11-29 14:24:43.360852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.711 [2024-11-29 14:24:43.360859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.711 [2024-11-29 14:24:43.360872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.711 [2024-11-29 14:24:43.360878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.711 [2024-11-29 14:24:43.360893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.711 [2024-11-29 14:24:43.360900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.711 [2024-11-29 14:24:43.360905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.711 
[2024-11-29 14:24:43.360911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.711 [2024-11-29 14:24:43.360917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.711 [2024-11-29 14:24:43.360923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.711 [2024-11-29 14:24:43.360930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.711 [2024-11-29 14:24:43.360938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.360946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.711 [2024-11-29 14:24:43.360952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.711 [2024-11-29 14:24:43.360958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.711 [2024-11-29 14:24:43.360967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.711 [2024-11-29 14:24:43.360973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.711 [2024-11-29 14:24:43.360980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.711 [2024-11-29 14:24:43.360986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.711 [2024-11-29 14:24:43.360992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.711 [2024-11-29 14:24:43.360999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.711 [2024-11-29 14:24:43.361005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.711 [2024-11-29 14:24:43.361037] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.711 [2024-11-29 14:24:43.361044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.711 [2024-11-29 14:24:43.361058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.711 [2024-11-29 14:24:43.361064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.711 [2024-11-29 14:24:43.361072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.711 [2024-11-29 14:24:43.361078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.361087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.711 [2024-11-29 14:24:43.361095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:17:01.711 [2024-11-29 14:24:43.361103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.380342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.380379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.711 [2024-11-29 14:24:43.380397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.191 ms 00:17:01.711 [2024-11-29 14:24:43.380405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.380550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.380563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:01.711 [2024-11-29 14:24:43.380572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:01.711 [2024-11-29 14:24:43.380582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.388914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.388950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.711 [2024-11-29 14:24:43.388962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.310 ms 00:17:01.711 [2024-11-29 14:24:43.388971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.389024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.389035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.711 [2024-11-29 14:24:43.389056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.711 [2024-11-29 14:24:43.389065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.389379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.389395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.711 [2024-11-29 14:24:43.389406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:01.711 [2024-11-29 14:24:43.389419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.389589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.389607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.711 [2024-11-29 14:24:43.389623] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:01.711 [2024-11-29 14:24:43.389636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.394656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.394693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.711 [2024-11-29 14:24:43.394704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:17:01.711 [2024-11-29 14:24:43.394713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.397264] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:01.711 [2024-11-29 14:24:43.397292] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.711 [2024-11-29 14:24:43.397305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.397310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.711 [2024-11-29 14:24:43.397316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:17:01.711 [2024-11-29 14:24:43.397322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.408709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.408733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.711 [2024-11-29 14:24:43.408742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.347 ms 00:17:01.711 [2024-11-29 14:24:43.408751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.410381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.410406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.711 [2024-11-29 14:24:43.410413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:17:01.711 [2024-11-29 14:24:43.410419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.411752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.411857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.711 [2024-11-29 14:24:43.411869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:17:01.711 [2024-11-29 14:24:43.411875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.711 [2024-11-29 14:24:43.412116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.711 [2024-11-29 14:24:43.412126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.711 [2024-11-29 14:24:43.412132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:01.711 [2024-11-29 14:24:43.412138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.426928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.427060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.712 [2024-11-29 14:24:43.427074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.772 ms 00:17:01.712 [2024-11-29 14:24:43.427080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.432868] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:01.712 [2024-11-29 14:24:43.444580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.444609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:01.712 [2024-11-29 14:24:43.444624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.453 ms 00:17:01.712 [2024-11-29 14:24:43.444630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.444708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.444716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:01.712 [2024-11-29 14:24:43.444723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:01.712 [2024-11-29 14:24:43.444728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.444763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.444775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:01.712 [2024-11-29 14:24:43.444782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:01.712 [2024-11-29 14:24:43.444787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.444806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.444812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:01.712 [2024-11-29 14:24:43.444818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.712 [2024-11-29 14:24:43.444823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.444846] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:01.712 [2024-11-29 14:24:43.444853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.444859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:01.712 [2024-11-29 14:24:43.444867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:01.712 [2024-11-29 14:24:43.444877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.448636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.448664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:01.712 [2024-11-29 14:24:43.448672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:17:01.712 [2024-11-29 14:24:43.448678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 [2024-11-29 14:24:43.448745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.712 [2024-11-29 14:24:43.448753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:01.712 [2024-11-29 14:24:43.448762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:01.712 [2024-11-29 14:24:43.448768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.712 
[2024-11-29 14:24:43.449368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:01.712 [2024-11-29 14:24:43.450296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.309 ms, result 0 00:17:01.712 [2024-11-29 14:24:43.451221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:01.712 [2024-11-29 14:24:43.461079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.096  [2024-11-29T14:24:45.832Z] Copying: 20/256 [MB] (20 MBps) [2024-11-29T14:24:46.775Z] Copying: 42/256 [MB] (22 MBps) [2024-11-29T14:24:47.719Z] Copying: 64/256 [MB] (22 MBps) [2024-11-29T14:24:48.663Z] Copying: 86/256 [MB] (21 MBps) [2024-11-29T14:24:49.607Z] Copying: 126/256 [MB] (40 MBps) [2024-11-29T14:24:50.550Z] Copying: 145/256 [MB] (18 MBps) [2024-11-29T14:24:51.493Z] Copying: 165/256 [MB] (19 MBps) [2024-11-29T14:24:52.881Z] Copying: 194/256 [MB] (29 MBps) [2024-11-29T14:24:53.827Z] Copying: 232/256 [MB] (38 MBps) [2024-11-29T14:24:53.827Z] Copying: 252/256 [MB] (20 MBps) [2024-11-29T14:24:53.827Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-29 14:24:53.640372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.033 [2024-11-29 14:24:53.642098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.642147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.033 [2024-11-29 14:24:53.642162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:12.033 [2024-11-29 14:24:53.642178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.642200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:12.033 [2024-11-29 14:24:53.642876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.642900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.033 [2024-11-29 14:24:53.642944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:17:12.033 [2024-11-29 14:24:53.642955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.646002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.646045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.033 [2024-11-29 14:24:53.646056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.020 ms 00:17:12.033 [2024-11-29 14:24:53.646065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.660481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.660577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.033 [2024-11-29 14:24:53.660596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.392 ms 00:17:12.033 [2024-11-29 14:24:53.660606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.667707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.667759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:17:12.033 [2024-11-29 14:24:53.667772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.010 ms 00:17:12.033 [2024-11-29 14:24:53.667781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.671349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.671405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.033 [2024-11-29 14:24:53.671417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.496 ms 00:17:12.033 [2024-11-29 14:24:53.671425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.677826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.678246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.033 [2024-11-29 14:24:53.678295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.301 ms 00:17:12.033 [2024-11-29 14:24:53.678333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.678720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.678754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.033 [2024-11-29 14:24:53.678778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:12.033 [2024-11-29 14:24:53.678799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.682557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.682607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.033 [2024-11-29 14:24:53.682618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:17:12.033 [2024-11-29 14:24:53.682627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.685873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.685925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.033 [2024-11-29 14:24:53.685935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:17:12.033 [2024-11-29 14:24:53.685943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.688271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.688322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.033 [2024-11-29 14:24:53.688332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:17:12.033 [2024-11-29 14:24:53.688339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.690576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.033 [2024-11-29 14:24:53.690624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.033 [2024-11-29 14:24:53.690635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:17:12.033 [2024-11-29 14:24:53.690643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.033 [2024-11-29 14:24:53.690689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:17:12.033 [2024-11-29 14:24:53.690706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.033 [2024-11-29 14:24:53.690725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:17:12.034 [2024-11-29 14:24:53.690957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.690994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.034 [2024-11-29 14:24:53.691487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691591] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.035 [2024-11-29 14:24:53.691608] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.035 [2024-11-29 14:24:53.691616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:12.035 [2024-11-29 14:24:53.691633] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.035 [2024-11-29 14:24:53.691659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.035 [2024-11-29 14:24:53.691667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.035 [2024-11-29 14:24:53.691677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.035 [2024-11-29 14:24:53.691685] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.035 [2024-11-29 14:24:53.691693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.035 [2024-11-29 14:24:53.691704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.035 [2024-11-29 14:24:53.691711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.035 [2024-11-29 14:24:53.691718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.035 [2024-11-29 14:24:53.691726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.035 [2024-11-29 14:24:53.691734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.035 [2024-11-29 14:24:53.691745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:17:12.035 [2024-11-29 14:24:53.691757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.694189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.035 [2024-11-29 14:24:53.694226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.035 [2024-11-29 14:24:53.694249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:17:12.035 [2024-11-29 14:24:53.694258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.694378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.035 [2024-11-29 14:24:53.694390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.035 [2024-11-29 14:24:53.694405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:12.035 [2024-11-29 14:24:53.694418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.702090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.702142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.035 [2024-11-29 14:24:53.702154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.702163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.702245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.702256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.035 [2024-11-29 14:24:53.702274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.702282] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.702329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.702347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.035 [2024-11-29 14:24:53.702355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.702364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.702382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.702391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.035 [2024-11-29 14:24:53.702398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.702409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.716254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.716304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.035 [2024-11-29 14:24:53.716315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.716323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.035 [2024-11-29 14:24:53.726569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.035 [2024-11-29 14:24:53.726641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.035 [2024-11-29 14:24:53.726695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.035 [2024-11-29 14:24:53.726804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.035 [2024-11-29 14:24:53.726869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.726948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.726960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.035 [2024-11-29 14:24:53.726969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.726981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.727031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.035 [2024-11-29 14:24:53.727042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.035 [2024-11-29 14:24:53.727050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.035 [2024-11-29 14:24:53.727058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.035 [2024-11-29 14:24:53.727211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.085 ms, result 0 00:17:12.296 00:17:12.296 00:17:12.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.296 14:24:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85703 00:17:12.296 14:24:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85703 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85703 ']' 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.297 14:24:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:12.297 14:24:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:12.558 [2024-11-29 14:24:54.169914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
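The xtrace lines above are the ftl_trim harness bringing up its own SPDK target: trim.sh records the spdk_tgt PID in svcpid and uses the autotest waitforlisten helper to block until the /var/tmp/spdk.sock RPC socket is ready, after which it loads the bdev configuration and issues the bdev_ftl_unmap calls whose trim traces appear further down in this log. A condensed sketch of that flow, under those assumptions, is below; waitforlisten and killprocess come from common/autotest_common.sh as shown in the trace, and the config-file variable is a placeholder for whatever trim.sh saved earlier, not a name taken from the script.

    # Sketch of the sequence this log records (not the full ftl/trim.sh script).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &   # start the target with FTL init logging
    svcpid=$!
    waitforlisten "$svcpid"            # autotest helper: wait for /var/tmp/spdk.sock to accept RPCs

    "$rpc" load_config < "$ftl_json_conf"    # recreate ftl0 from the previously saved config (file name assumed)
    "$rpc" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024  # trim 1024 blocks at the start of the device
    "$rpc" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024  # trim 1024 blocks at the top of the L2P range

    killprocess "$svcpid"              # autotest helper: stop the target, producing the 'FTL shutdown' trace
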
00:17:12.558 [2024-11-29 14:24:54.170065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85703 ] 00:17:12.558 [2024-11-29 14:24:54.322423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.819 [2024-11-29 14:24:54.374005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.392 14:24:55 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.392 14:24:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:13.392 14:24:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:13.654 [2024-11-29 14:24:55.243668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.654 [2024-11-29 14:24:55.243749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.654 [2024-11-29 14:24:55.416943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.654 [2024-11-29 14:24:55.417007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.654 [2024-11-29 14:24:55.417022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:13.654 [2024-11-29 14:24:55.417035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.654 [2024-11-29 14:24:55.419645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.654 [2024-11-29 14:24:55.419702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.654 [2024-11-29 14:24:55.419716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:17:13.654 [2024-11-29 14:24:55.419729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.654 [2024-11-29 14:24:55.419837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.654 [2024-11-29 14:24:55.420135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.654 [2024-11-29 14:24:55.420153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.654 [2024-11-29 14:24:55.420163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.654 [2024-11-29 14:24:55.420176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:13.655 [2024-11-29 14:24:55.420186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.422049] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:13.655 [2024-11-29 14:24:55.426186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.426242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:13.655 [2024-11-29 14:24:55.426256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:17:13.655 [2024-11-29 14:24:55.426264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.426369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.426381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:13.655 [2024-11-29 14:24:55.426396] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:13.655 [2024-11-29 14:24:55.426403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.434945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.434989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.655 [2024-11-29 14:24:55.435003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.485 ms 00:17:13.655 [2024-11-29 14:24:55.435011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.435131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.435142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.655 [2024-11-29 14:24:55.435155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:13.655 [2024-11-29 14:24:55.435163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.435195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.435205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.655 [2024-11-29 14:24:55.435216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:13.655 [2024-11-29 14:24:55.435227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.435254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:13.655 [2024-11-29 14:24:55.437481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.437552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.655 [2024-11-29 14:24:55.437563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:17:13.655 [2024-11-29 14:24:55.437573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.437624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.437634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.655 [2024-11-29 14:24:55.437648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:13.655 [2024-11-29 14:24:55.437658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.437682] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:13.655 [2024-11-29 14:24:55.437705] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:13.655 [2024-11-29 14:24:55.437749] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:13.655 [2024-11-29 14:24:55.437771] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:13.655 [2024-11-29 14:24:55.437879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:13.655 [2024-11-29 14:24:55.437895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.655 [2024-11-29 14:24:55.437906] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:13.655 [2024-11-29 14:24:55.437924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.655 [2024-11-29 14:24:55.437935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.655 [2024-11-29 14:24:55.437947] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:13.655 [2024-11-29 14:24:55.437956] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.655 [2024-11-29 14:24:55.437965] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:13.655 [2024-11-29 14:24:55.437975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:13.655 [2024-11-29 14:24:55.437986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.437996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.655 [2024-11-29 14:24:55.438007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:13.655 [2024-11-29 14:24:55.438014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.438105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.655 [2024-11-29 14:24:55.438116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.655 [2024-11-29 14:24:55.438127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:13.655 [2024-11-29 14:24:55.438140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.655 [2024-11-29 14:24:55.438244] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.655 [2024-11-29 14:24:55.438256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.655 [2024-11-29 14:24:55.438273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.655 [2024-11-29 14:24:55.438306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.655 [2024-11-29 14:24:55.438348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.655 [2024-11-29 14:24:55.438366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.655 [2024-11-29 14:24:55.438373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:13.655 [2024-11-29 14:24:55.438384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.655 [2024-11-29 14:24:55.438393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.655 [2024-11-29 14:24:55.438403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:13.655 [2024-11-29 14:24:55.438411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 
[2024-11-29 14:24:55.438424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.655 [2024-11-29 14:24:55.438434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.655 [2024-11-29 14:24:55.438464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.655 [2024-11-29 14:24:55.438514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.655 [2024-11-29 14:24:55.438540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.655 [2024-11-29 14:24:55.438565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.655 [2024-11-29 14:24:55.438589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.655 [2024-11-29 14:24:55.438605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.655 [2024-11-29 14:24:55.438613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:13.655 [2024-11-29 14:24:55.438623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.655 [2024-11-29 14:24:55.438630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:13.655 [2024-11-29 14:24:55.438639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:13.655 [2024-11-29 14:24:55.438646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:13.655 [2024-11-29 14:24:55.438664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:13.655 [2024-11-29 14:24:55.438673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438680] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.655 [2024-11-29 14:24:55.438691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.655 [2024-11-29 14:24:55.438701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.655 [2024-11-29 14:24:55.438712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.655 [2024-11-29 14:24:55.438720] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:13.655 [2024-11-29 14:24:55.438728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.655 [2024-11-29 14:24:55.438736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.656 [2024-11-29 14:24:55.438745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.656 [2024-11-29 14:24:55.438752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.656 [2024-11-29 14:24:55.438765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.656 [2024-11-29 14:24:55.438775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.656 [2024-11-29 14:24:55.438787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:13.656 [2024-11-29 14:24:55.438806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:13.656 [2024-11-29 14:24:55.438814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:13.656 [2024-11-29 14:24:55.438837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:13.656 [2024-11-29 14:24:55.438845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:13.656 [2024-11-29 14:24:55.438854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:13.656 [2024-11-29 14:24:55.438862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:13.656 [2024-11-29 14:24:55.438872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:13.656 [2024-11-29 14:24:55.438879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:13.656 [2024-11-29 14:24:55.438888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:13.656 [2024-11-29 14:24:55.438945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.656 [2024-11-29 
14:24:55.438955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.656 [2024-11-29 14:24:55.438975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.656 [2024-11-29 14:24:55.438984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.656 [2024-11-29 14:24:55.438993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.656 [2024-11-29 14:24:55.439001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.656 [2024-11-29 14:24:55.439016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:13.656 [2024-11-29 14:24:55.439025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:17:13.656 [2024-11-29 14:24:55.439036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.453766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.453814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.917 [2024-11-29 14:24:55.453827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:17:13.917 [2024-11-29 14:24:55.453839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.453972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.453987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:13.917 [2024-11-29 14:24:55.453997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:13.917 [2024-11-29 14:24:55.454009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.465801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.465853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.917 [2024-11-29 14:24:55.465868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.769 ms 00:17:13.917 [2024-11-29 14:24:55.465879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.465951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.465966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.917 [2024-11-29 14:24:55.465975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.917 [2024-11-29 14:24:55.465984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.466454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.466511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.917 [2024-11-29 14:24:55.466523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:17:13.917 [2024-11-29 14:24:55.466535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.466687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.466710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.917 [2024-11-29 14:24:55.466723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:13.917 [2024-11-29 14:24:55.466736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.492171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.492409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.917 [2024-11-29 14:24:55.492437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.408 ms 00:17:13.917 [2024-11-29 14:24:55.492448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.496636] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:13.917 [2024-11-29 14:24:55.496694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:13.917 [2024-11-29 14:24:55.496708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.496720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:13.917 [2024-11-29 14:24:55.496730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.092 ms 00:17:13.917 [2024-11-29 14:24:55.496741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.512830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.512886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:13.917 [2024-11-29 14:24:55.512900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.001 ms 00:17:13.917 [2024-11-29 14:24:55.512914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.516248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.516307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:13.917 [2024-11-29 14:24:55.516318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:17:13.917 [2024-11-29 14:24:55.516329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.519245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.519444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:13.917 [2024-11-29 14:24:55.519463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:17:13.917 [2024-11-29 14:24:55.519472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.520142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.520198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:13.917 [2024-11-29 14:24:55.520213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:13.917 [2024-11-29 14:24:55.520229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 
14:24:55.545039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.545108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:13.917 [2024-11-29 14:24:55.545123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.785 ms 00:17:13.917 [2024-11-29 14:24:55.545136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.553277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:13.917 [2024-11-29 14:24:55.571838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.571887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:13.917 [2024-11-29 14:24:55.571903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.600 ms 00:17:13.917 [2024-11-29 14:24:55.571912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.572002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.572013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:13.917 [2024-11-29 14:24:55.572025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:13.917 [2024-11-29 14:24:55.572036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.572095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.572111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:13.917 [2024-11-29 14:24:55.572125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:13.917 [2024-11-29 14:24:55.572134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.572162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.572171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:13.917 [2024-11-29 14:24:55.572187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:13.917 [2024-11-29 14:24:55.572195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.572238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:13.917 [2024-11-29 14:24:55.572248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.572262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:13.917 [2024-11-29 14:24:55.572269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:13.917 [2024-11-29 14:24:55.572279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.578212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.578270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:13.917 [2024-11-29 14:24:55.578281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.906 ms 00:17:13.917 [2024-11-29 14:24:55.578292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.578405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.917 [2024-11-29 14:24:55.578419] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:13.917 [2024-11-29 14:24:55.578432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:13.917 [2024-11-29 14:24:55.578442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.917 [2024-11-29 14:24:55.579860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.917 [2024-11-29 14:24:55.581190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 162.554 ms, result 0 00:17:13.917 [2024-11-29 14:24:55.583522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.917 Some configs were skipped because the RPC state that can call them passed over. 00:17:13.917 14:24:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:14.179 [2024-11-29 14:24:55.828150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.179 [2024-11-29 14:24:55.828369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:14.179 [2024-11-29 14:24:55.828400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:17:14.179 [2024-11-29 14:24:55.828410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.179 [2024-11-29 14:24:55.828462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.504 ms, result 0 00:17:14.179 true 00:17:14.179 14:24:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:14.440 [2024-11-29 14:24:56.039865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.440 [2024-11-29 14:24:56.039930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:14.440 [2024-11-29 14:24:56.039943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:17:14.440 [2024-11-29 14:24:56.039953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.440 [2024-11-29 14:24:56.039992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.734 ms, result 0 00:17:14.440 true 00:17:14.440 14:24:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85703 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85703 ']' 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85703 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85703 00:17:14.440 killing process with pid 85703 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85703' 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85703 00:17:14.440 14:24:56 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85703 00:17:14.440 [2024-11-29 14:24:56.216924] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.440 [2024-11-29 14:24:56.216980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:14.440 [2024-11-29 14:24:56.216994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:14.440 [2024-11-29 14:24:56.217002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.440 [2024-11-29 14:24:56.217032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:14.440 [2024-11-29 14:24:56.217520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.440 [2024-11-29 14:24:56.217543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:14.440 [2024-11-29 14:24:56.217553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:17:14.440 [2024-11-29 14:24:56.217562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.440 [2024-11-29 14:24:56.217845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.440 [2024-11-29 14:24:56.217867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:14.440 [2024-11-29 14:24:56.217877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:14.440 [2024-11-29 14:24:56.217893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.440 [2024-11-29 14:24:56.222573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.441 [2024-11-29 14:24:56.222612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:14.441 [2024-11-29 14:24:56.222622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:17:14.441 [2024-11-29 14:24:56.222631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.441 [2024-11-29 14:24:56.229547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.441 [2024-11-29 14:24:56.229720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:14.441 [2024-11-29 14:24:56.229736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:17:14.441 [2024-11-29 14:24:56.229748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.441 [2024-11-29 14:24:56.231994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.441 [2024-11-29 14:24:56.232036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:14.441 [2024-11-29 14:24:56.232046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:17:14.441 [2024-11-29 14:24:56.232055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.703 [2024-11-29 14:24:56.236395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.703 [2024-11-29 14:24:56.236439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:14.703 [2024-11-29 14:24:56.236453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:17:14.703 [2024-11-29 14:24:56.236463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.703 [2024-11-29 14:24:56.236615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.703 [2024-11-29 14:24:56.236635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:14.703 [2024-11-29 14:24:56.236643] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:14.703 [2024-11-29 14:24:56.236652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.703 [2024-11-29 14:24:56.239562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.703 [2024-11-29 14:24:56.239601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:14.704 [2024-11-29 14:24:56.239610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:17:14.704 [2024-11-29 14:24:56.239622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.704 [2024-11-29 14:24:56.242019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.704 [2024-11-29 14:24:56.242060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:14.704 [2024-11-29 14:24:56.242069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.360 ms 00:17:14.704 [2024-11-29 14:24:56.242078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.704 [2024-11-29 14:24:56.243938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.704 [2024-11-29 14:24:56.243980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:14.704 [2024-11-29 14:24:56.243990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:17:14.704 [2024-11-29 14:24:56.243998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.704 [2024-11-29 14:24:56.246123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.704 [2024-11-29 14:24:56.246168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:14.704 [2024-11-29 14:24:56.246177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:17:14.704 [2024-11-29 14:24:56.246186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.704 [2024-11-29 14:24:56.246235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:14.704 [2024-11-29 14:24:56.246252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246346] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 
[2024-11-29 14:24:56.246580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:14.704 [2024-11-29 14:24:56.246795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:14.704 [2024-11-29 14:24:56.246955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.246965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.246972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.246981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.246989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.246998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:14.705 [2024-11-29 14:24:56.247187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:14.705 [2024-11-29 14:24:56.247195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:14.705 [2024-11-29 14:24:56.247205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:14.705 [2024-11-29 14:24:56.247213] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:14.705 [2024-11-29 14:24:56.247224] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:14.705 [2024-11-29 14:24:56.247235] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:14.705 [2024-11-29 14:24:56.247245] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:14.705 [2024-11-29 14:24:56.247253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:14.705 [2024-11-29 14:24:56.247265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:14.705 [2024-11-29 14:24:56.247271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:14.705 [2024-11-29 14:24:56.247279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:14.705 [2024-11-29 14:24:56.247286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
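
In the Bands validity dump above, each line appears to read as valid blocks / band size in FTL blocks, followed by the band's write count and state; all 100 bands report 0 / 261120 with state free because no user data was written through ftl0 in this run. The statistics block is consistent with that: WAF (write amplification factor) is conventionally media writes divided by user writes, and with total writes: 960 (presumably metadata/housekeeping writes) against user writes: 0, the ratio is undefined and is printed as inf.
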
00:17:14.705 [2024-11-29 14:24:56.247295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:14.705 [2024-11-29 14:24:56.247304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:17:14.705 [2024-11-29 14:24:56.247314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.249330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.705 [2024-11-29 14:24:56.249450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:14.705 [2024-11-29 14:24:56.249520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.996 ms 00:17:14.705 [2024-11-29 14:24:56.249547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.249655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.705 [2024-11-29 14:24:56.249740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:14.705 [2024-11-29 14:24:56.249800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:14.705 [2024-11-29 14:24:56.249825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.256248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.256377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.705 [2024-11-29 14:24:56.256428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.256453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.256560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.256588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.705 [2024-11-29 14:24:56.256610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.256697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.256756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.256829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.705 [2024-11-29 14:24:56.256853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.256874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.256928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.256954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.705 [2024-11-29 14:24:56.256974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.256994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.267649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.267806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.705 [2024-11-29 14:24:56.267857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.267881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 
14:24:56.276344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.276505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.705 [2024-11-29 14:24:56.276559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.276587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.276645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.276671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.705 [2024-11-29 14:24:56.276692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.276717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.276759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.276840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.705 [2024-11-29 14:24:56.276866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.276887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.276973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.277008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.705 [2024-11-29 14:24:56.277019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.277032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.277071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.277084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:14.705 [2024-11-29 14:24:56.277095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.277107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.277146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.277158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.705 [2024-11-29 14:24:56.277166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.277181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.277232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:14.705 [2024-11-29 14:24:56.277244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.705 [2024-11-29 14:24:56.277252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:14.705 [2024-11-29 14:24:56.277260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.705 [2024-11-29 14:24:56.277401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.451 ms, result 0 00:17:14.968 14:24:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:14.968 14:24:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:14.968 [2024-11-29 14:24:56.568765] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:14.968 [2024-11-29 14:24:56.568906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85750 ] 00:17:14.968 [2024-11-29 14:24:56.719827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.229 [2024-11-29 14:24:56.771968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.229 [2024-11-29 14:24:56.884716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.229 [2024-11-29 14:24:56.884797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.492 [2024-11-29 14:24:57.046141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.046211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.492 [2024-11-29 14:24:57.046226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:15.492 [2024-11-29 14:24:57.046236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.048853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.048908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.492 [2024-11-29 14:24:57.048923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:17:15.492 [2024-11-29 14:24:57.048931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.049044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.492 [2024-11-29 14:24:57.049313] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.492 [2024-11-29 14:24:57.049332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.049343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.492 [2024-11-29 14:24:57.049356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:17:15.492 [2024-11-29 14:24:57.049368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.051260] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:15.492 [2024-11-29 14:24:57.055402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.055460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:15.492 [2024-11-29 14:24:57.055473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.144 ms 00:17:15.492 [2024-11-29 14:24:57.055484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.055617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.055631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:15.492 [2024-11-29 14:24:57.055647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:17:15.492 [2024-11-29 14:24:57.055656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.064235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.064282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.492 [2024-11-29 14:24:57.064294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.532 ms 00:17:15.492 [2024-11-29 14:24:57.064302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.064447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.064459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.492 [2024-11-29 14:24:57.064470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:15.492 [2024-11-29 14:24:57.064479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.064537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.064550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.492 [2024-11-29 14:24:57.064565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:15.492 [2024-11-29 14:24:57.064572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.064595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:15.492 [2024-11-29 14:24:57.066808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.066850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.492 [2024-11-29 14:24:57.066869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:17:15.492 [2024-11-29 14:24:57.066879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.066942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.066956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.492 [2024-11-29 14:24:57.066970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:15.492 [2024-11-29 14:24:57.066979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.067000] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:15.492 [2024-11-29 14:24:57.067028] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:15.492 [2024-11-29 14:24:57.067066] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:15.492 [2024-11-29 14:24:57.067083] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:15.492 [2024-11-29 14:24:57.067195] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.492 [2024-11-29 14:24:57.067209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.492 [2024-11-29 14:24:57.067223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:15.492 [2024-11-29 14:24:57.067235] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.492 [2024-11-29 14:24:57.067247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:15.492 [2024-11-29 14:24:57.067257] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:15.492 [2024-11-29 14:24:57.067265] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.492 [2024-11-29 14:24:57.067273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.492 [2024-11-29 14:24:57.067282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.492 [2024-11-29 14:24:57.067293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.067303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.492 [2024-11-29 14:24:57.067313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:15.492 [2024-11-29 14:24:57.067323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.067412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.492 [2024-11-29 14:24:57.067423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.492 [2024-11-29 14:24:57.067431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:15.492 [2024-11-29 14:24:57.067442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.492 [2024-11-29 14:24:57.067571] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.492 [2024-11-29 14:24:57.067591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.492 [2024-11-29 14:24:57.067600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.492 [2024-11-29 14:24:57.067614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.492 [2024-11-29 14:24:57.067625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.492 [2024-11-29 14:24:57.067633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.492 [2024-11-29 14:24:57.067640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:15.492 [2024-11-29 14:24:57.067649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.492 [2024-11-29 14:24:57.067660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:15.492 [2024-11-29 14:24:57.067669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.492 [2024-11-29 14:24:57.067677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.493 [2024-11-29 14:24:57.067684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:15.493 [2024-11-29 14:24:57.067691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.493 [2024-11-29 14:24:57.067699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.493 [2024-11-29 14:24:57.067708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:15.493 [2024-11-29 14:24:57.067717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067724] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.493 [2024-11-29 14:24:57.067730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.493 [2024-11-29 14:24:57.067754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.493 [2024-11-29 14:24:57.067779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.493 [2024-11-29 14:24:57.067808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.493 [2024-11-29 14:24:57.067830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.493 [2024-11-29 14:24:57.067850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.493 [2024-11-29 14:24:57.067866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.493 [2024-11-29 14:24:57.067874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:15.493 [2024-11-29 14:24:57.067880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.493 [2024-11-29 14:24:57.067887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.493 [2024-11-29 14:24:57.067894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:15.493 [2024-11-29 14:24:57.067900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:15.493 [2024-11-29 14:24:57.067916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:15.493 [2024-11-29 14:24:57.067926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067933] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.493 [2024-11-29 14:24:57.067944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.493 [2024-11-29 14:24:57.067951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.493 [2024-11-29 14:24:57.067965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.493 [2024-11-29 14:24:57.067973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:15.493 
[2024-11-29 14:24:57.067980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.493 [2024-11-29 14:24:57.067987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.493 [2024-11-29 14:24:57.067994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.493 [2024-11-29 14:24:57.068002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.493 [2024-11-29 14:24:57.068009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.493 [2024-11-29 14:24:57.068020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.493 [2024-11-29 14:24:57.068031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:15.493 [2024-11-29 14:24:57.068051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:15.493 [2024-11-29 14:24:57.068059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:15.493 [2024-11-29 14:24:57.068066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:15.493 [2024-11-29 14:24:57.068073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:15.493 [2024-11-29 14:24:57.068081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:15.493 [2024-11-29 14:24:57.068089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:15.493 [2024-11-29 14:24:57.068097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:15.493 [2024-11-29 14:24:57.068103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:15.493 [2024-11-29 14:24:57.068111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:15.493 [2024-11-29 14:24:57.068150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.493 [2024-11-29 14:24:57.068159] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.493 [2024-11-29 14:24:57.068181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.493 [2024-11-29 14:24:57.068188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.493 [2024-11-29 14:24:57.068196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.493 [2024-11-29 14:24:57.068203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.068214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.493 [2024-11-29 14:24:57.068225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:17:15.493 [2024-11-29 14:24:57.068233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.092599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.092656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.493 [2024-11-29 14:24:57.092672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.312 ms 00:17:15.493 [2024-11-29 14:24:57.092682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.092836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.092850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.493 [2024-11-29 14:24:57.092861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:15.493 [2024-11-29 14:24:57.092875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.105417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.105470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.493 [2024-11-29 14:24:57.105482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.511 ms 00:17:15.493 [2024-11-29 14:24:57.105532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.105615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.105628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.493 [2024-11-29 14:24:57.105643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:15.493 [2024-11-29 14:24:57.105659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.106241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.106287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.493 [2024-11-29 14:24:57.106300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:17:15.493 [2024-11-29 14:24:57.106310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 
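
The SB metadata layout tables above give region sizes in FTL blocks (blk_sz, in hex), while the earlier NV cache / base device dumps give the same regions in MiB; the two agree if one FTL block is 4 KiB. For example, the l2p region has blk_sz 0x5a00 = 23040 blocks, and 23040 x 4 KiB = 90.00 MiB, matching the 'Region l2p ... blocks: 90.00 MiB' line; the same 90 MiB also falls out of 23592960 L2P entries x 4 bytes per entry ('L2P address size: 4').
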
14:24:57.106488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.106520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.493 [2024-11-29 14:24:57.106531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:15.493 [2024-11-29 14:24:57.106544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.114446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.114707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.493 [2024-11-29 14:24:57.114727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.877 ms 00:17:15.493 [2024-11-29 14:24:57.114736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.493 [2024-11-29 14:24:57.118824] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:15.493 [2024-11-29 14:24:57.118885] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:15.493 [2024-11-29 14:24:57.118897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.493 [2024-11-29 14:24:57.118934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:15.494 [2024-11-29 14:24:57.118946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:17:15.494 [2024-11-29 14:24:57.118953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.135105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.135154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:15.494 [2024-11-29 14:24:57.135168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.067 ms 00:17:15.494 [2024-11-29 14:24:57.135176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.138408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.138640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:15.494 [2024-11-29 14:24:57.138661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:17:15.494 [2024-11-29 14:24:57.138669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.141622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.141673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:15.494 [2024-11-29 14:24:57.141693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:15.494 [2024-11-29 14:24:57.141700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.142054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.142068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.494 [2024-11-29 14:24:57.142082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:15.494 [2024-11-29 14:24:57.142096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.166777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.166837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:15.494 [2024-11-29 14:24:57.166851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.657 ms 00:17:15.494 [2024-11-29 14:24:57.166860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.175169] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:15.494 [2024-11-29 14:24:57.194894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.194958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.494 [2024-11-29 14:24:57.194971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.914 ms 00:17:15.494 [2024-11-29 14:24:57.194980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.195086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.195098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:15.494 [2024-11-29 14:24:57.195112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:15.494 [2024-11-29 14:24:57.195125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.195189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.195205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.494 [2024-11-29 14:24:57.195214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:15.494 [2024-11-29 14:24:57.195222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.195245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.195254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:15.494 [2024-11-29 14:24:57.195263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:15.494 [2024-11-29 14:24:57.195272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.195312] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:15.494 [2024-11-29 14:24:57.195324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.195333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:15.494 [2024-11-29 14:24:57.195344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:15.494 [2024-11-29 14:24:57.195358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.201578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.201625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:15.494 [2024-11-29 14:24:57.201638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.195 ms 00:17:15.494 [2024-11-29 14:24:57.201646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.201742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.494 [2024-11-29 14:24:57.201756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:15.494 [2024-11-29 14:24:57.201766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:15.494 [2024-11-29 14:24:57.201775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.494 [2024-11-29 14:24:57.202777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.494 [2024-11-29 14:24:57.204178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.303 ms, result 0 00:17:15.494 [2024-11-29 14:24:57.205444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:15.494 [2024-11-29 14:24:57.212814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.436  [2024-11-29T14:24:59.617Z] Copying: 13/256 [MB] (13 MBps) [2024-11-29T14:25:00.560Z] Copying: 30/256 [MB] (16 MBps) [2024-11-29T14:25:01.530Z] Copying: 50/256 [MB] (20 MBps) [2024-11-29T14:25:02.474Z] Copying: 73/256 [MB] (22 MBps) [2024-11-29T14:25:03.417Z] Copying: 89/256 [MB] (16 MBps) [2024-11-29T14:25:04.361Z] Copying: 108/256 [MB] (19 MBps) [2024-11-29T14:25:05.304Z] Copying: 127/256 [MB] (18 MBps) [2024-11-29T14:25:06.248Z] Copying: 142/256 [MB] (14 MBps) [2024-11-29T14:25:07.635Z] Copying: 159/256 [MB] (17 MBps) [2024-11-29T14:25:08.576Z] Copying: 174/256 [MB] (14 MBps) [2024-11-29T14:25:09.520Z] Copying: 195/256 [MB] (20 MBps) [2024-11-29T14:25:10.465Z] Copying: 211/256 [MB] (16 MBps) [2024-11-29T14:25:11.411Z] Copying: 230/256 [MB] (19 MBps) [2024-11-29T14:25:11.411Z] Copying: 252/256 [MB] (22 MBps) [2024-11-29T14:25:11.411Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-29 14:25:11.371845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.617 [2024-11-29 14:25:11.373697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.373746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.617 [2024-11-29 14:25:11.373765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.617 [2024-11-29 14:25:11.373775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.373798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:29.617 [2024-11-29 14:25:11.374443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.374468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.617 [2024-11-29 14:25:11.374479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:17:29.617 [2024-11-29 14:25:11.374513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.374780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.374794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.617 [2024-11-29 14:25:11.374805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:29.617 [2024-11-29 14:25:11.374814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.378555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.378584] 
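
The 256 MB copy above (average 18 MBps) is the read-back started at trim.sh@85: spdk_dd reads 65536 blocks from the ftl0 bdev into test/ftl/data, and 256 MB over 65536 blocks again works out to roughly 4 KiB per block. A minimal standalone sketch of the same transfer, assuming the paths from this run and the bdev configuration saved in ftl.json:

  # Hedged reproduction of the dd step logged above; the paths are the ones
  # used by this particular run and would differ in another checkout.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

spdk_dd starts its own SPDK application (hence the second 'FTL startup' sequence earlier in this log) and tears it down once the copy completes, which is why an 'FTL shutdown' management process follows immediately after the progress lines.
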
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.617 [2024-11-29 14:25:11.378600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:17:29.617 [2024-11-29 14:25:11.378613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.385613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.385655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.617 [2024-11-29 14:25:11.385667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.967 ms 00:17:29.617 [2024-11-29 14:25:11.385675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.388847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.388897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.617 [2024-11-29 14:25:11.388908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:17:29.617 [2024-11-29 14:25:11.388927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.394385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.394438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.617 [2024-11-29 14:25:11.394457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:17:29.617 [2024-11-29 14:25:11.394466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.394620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.394633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.617 [2024-11-29 14:25:11.394643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:29.617 [2024-11-29 14:25:11.394652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.397626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.397673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.617 [2024-11-29 14:25:11.397684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:17:29.617 [2024-11-29 14:25:11.397691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.400561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.400607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.617 [2024-11-29 14:25:11.400617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:17:29.617 [2024-11-29 14:25:11.400626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.403041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.403088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.617 [2024-11-29 14:25:11.403099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.371 ms 00:17:29.617 [2024-11-29 14:25:11.403108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.405377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:29.617 [2024-11-29 14:25:11.405434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.617 [2024-11-29 14:25:11.405445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:17:29.617 [2024-11-29 14:25:11.405453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.617 [2024-11-29 14:25:11.405516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.617 [2024-11-29 14:25:11.405540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.617 [2024-11-29 14:25:11.405551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.617 [2024-11-29 14:25:11.405560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.617 [2024-11-29 14:25:11.405568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.617 [2024-11-29 14:25:11.405578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.617 [2024-11-29 14:25:11.405586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405715] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405924] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.405998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 
14:25:11.406115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.618 [2024-11-29 14:25:11.406303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:17:29.619 [2024-11-29 14:25:11.406311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.619 [2024-11-29 14:25:11.406318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.619 [2024-11-29 14:25:11.406325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.619 [2024-11-29 14:25:11.406334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.619 [2024-11-29 14:25:11.406351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.619 [2024-11-29 14:25:11.406358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:29.619 [2024-11-29 14:25:11.406378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.619 [2024-11-29 14:25:11.406387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.619 [2024-11-29 14:25:11.406395] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.619 [2024-11-29 14:25:11.406403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.619 [2024-11-29 14:25:11.406410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.619 [2024-11-29 14:25:11.406419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.619 [2024-11-29 14:25:11.406427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.619 [2024-11-29 14:25:11.406434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.619 [2024-11-29 14:25:11.406441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.619 [2024-11-29 14:25:11.406448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.619 [2024-11-29 14:25:11.406455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.619 [2024-11-29 14:25:11.406468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:29.619 [2024-11-29 14:25:11.406475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.408680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.881 [2024-11-29 14:25:11.408712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.881 [2024-11-29 14:25:11.408723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:17:29.881 [2024-11-29 14:25:11.408733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.408838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.881 [2024-11-29 14:25:11.408854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.881 [2024-11-29 14:25:11.408863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:29.881 [2024-11-29 14:25:11.408877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.415939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.416153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.881 [2024-11-29 14:25:11.416183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 
[2024-11-29 14:25:11.416191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.416276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.416289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.881 [2024-11-29 14:25:11.416298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.416311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.416364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.416375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.881 [2024-11-29 14:25:11.416387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.416397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.416414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.416423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.881 [2024-11-29 14:25:11.416434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.416441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.429109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.429162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.881 [2024-11-29 14:25:11.429173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.429181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.439038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.439266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.881 [2024-11-29 14:25:11.439283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.439293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.439341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.439352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.881 [2024-11-29 14:25:11.439361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.439369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.439400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.881 [2024-11-29 14:25:11.439410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.881 [2024-11-29 14:25:11.439427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.881 [2024-11-29 14:25:11.439439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.881 [2024-11-29 14:25:11.439547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.882 [2024-11-29 14:25:11.439560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.882 [2024-11-29 14:25:11.439569] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.882 [2024-11-29 14:25:11.439577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.882 [2024-11-29 14:25:11.439610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.882 [2024-11-29 14:25:11.439621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.882 [2024-11-29 14:25:11.439630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.882 [2024-11-29 14:25:11.439639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.882 [2024-11-29 14:25:11.439682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.882 [2024-11-29 14:25:11.439693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.882 [2024-11-29 14:25:11.439703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.882 [2024-11-29 14:25:11.439712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.882 [2024-11-29 14:25:11.439760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.882 [2024-11-29 14:25:11.439772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.882 [2024-11-29 14:25:11.439782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.882 [2024-11-29 14:25:11.439796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.882 [2024-11-29 14:25:11.439946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.218 ms, result 0 00:17:29.882 00:17:29.882 00:17:30.143 14:25:11 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:30.143 14:25:11 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:30.716 14:25:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.716 [2024-11-29 14:25:12.312306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:30.716 [2024-11-29 14:25:12.312706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85915 ] 00:17:30.716 [2024-11-29 14:25:12.464251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.977 [2024-11-29 14:25:12.513783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.977 [2024-11-29 14:25:12.628081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.977 [2024-11-29 14:25:12.628163] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.239 [2024-11-29 14:25:12.788253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.239 [2024-11-29 14:25:12.788313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.239 [2024-11-29 14:25:12.788332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.239 [2024-11-29 14:25:12.788341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.790942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.790989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.240 [2024-11-29 14:25:12.791003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:17:31.240 [2024-11-29 14:25:12.791011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.791118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.240 [2024-11-29 14:25:12.791387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.240 [2024-11-29 14:25:12.791406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.791415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.240 [2024-11-29 14:25:12.791428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:31.240 [2024-11-29 14:25:12.791438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.793227] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.240 [2024-11-29 14:25:12.796883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.796937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.240 [2024-11-29 14:25:12.796949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:17:31.240 [2024-11-29 14:25:12.796960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.797040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.797050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.240 [2024-11-29 14:25:12.797059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:31.240 [2024-11-29 14:25:12.797066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.805444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:31.240 [2024-11-29 14:25:12.805485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.240 [2024-11-29 14:25:12.805525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.339 ms 00:17:31.240 [2024-11-29 14:25:12.805537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.805679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.805691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.240 [2024-11-29 14:25:12.805701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:31.240 [2024-11-29 14:25:12.805709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.805737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.805749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.240 [2024-11-29 14:25:12.805759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:31.240 [2024-11-29 14:25:12.805767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.805790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.240 [2024-11-29 14:25:12.807894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.807931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.240 [2024-11-29 14:25:12.807951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:17:31.240 [2024-11-29 14:25:12.807961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.808008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.808021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.240 [2024-11-29 14:25:12.808032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:31.240 [2024-11-29 14:25:12.808040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.808060] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.240 [2024-11-29 14:25:12.808079] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:31.240 [2024-11-29 14:25:12.808116] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.240 [2024-11-29 14:25:12.808134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:31.240 [2024-11-29 14:25:12.808242] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.240 [2024-11-29 14:25:12.808255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.240 [2024-11-29 14:25:12.808266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.240 [2024-11-29 14:25:12.808279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808290] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808298] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.240 [2024-11-29 14:25:12.808306] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.240 [2024-11-29 14:25:12.808317] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.240 [2024-11-29 14:25:12.808325] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.240 [2024-11-29 14:25:12.808337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.808350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.240 [2024-11-29 14:25:12.808361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:17:31.240 [2024-11-29 14:25:12.808371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.808461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.240 [2024-11-29 14:25:12.808472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.240 [2024-11-29 14:25:12.808481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:31.240 [2024-11-29 14:25:12.808516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.240 [2024-11-29 14:25:12.808621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.240 [2024-11-29 14:25:12.808640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.240 [2024-11-29 14:25:12.808649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.240 [2024-11-29 14:25:12.808679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.240 [2024-11-29 14:25:12.808704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.240 [2024-11-29 14:25:12.808721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.240 [2024-11-29 14:25:12.808728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.240 [2024-11-29 14:25:12.808734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.240 [2024-11-29 14:25:12.808741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.240 [2024-11-29 14:25:12.808749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:31.240 [2024-11-29 14:25:12.808760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.240 [2024-11-29 14:25:12.808775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808782] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.240 [2024-11-29 14:25:12.808796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.240 [2024-11-29 14:25:12.808818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.240 [2024-11-29 14:25:12.808846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.240 [2024-11-29 14:25:12.808868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.240 [2024-11-29 14:25:12.808882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.240 [2024-11-29 14:25:12.808889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.240 [2024-11-29 14:25:12.808902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.240 [2024-11-29 14:25:12.808910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:31.240 [2024-11-29 14:25:12.808917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.240 [2024-11-29 14:25:12.808924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.240 [2024-11-29 14:25:12.808931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:31.240 [2024-11-29 14:25:12.808938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.240 [2024-11-29 14:25:12.808956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:31.240 [2024-11-29 14:25:12.808963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.240 [2024-11-29 14:25:12.808969] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.240 [2024-11-29 14:25:12.808977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.241 [2024-11-29 14:25:12.808984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.241 [2024-11-29 14:25:12.808995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.241 [2024-11-29 14:25:12.809006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.241 [2024-11-29 14:25:12.809015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.241 [2024-11-29 14:25:12.809021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.241 
[2024-11-29 14:25:12.809028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.241 [2024-11-29 14:25:12.809035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.241 [2024-11-29 14:25:12.809042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.241 [2024-11-29 14:25:12.809051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.241 [2024-11-29 14:25:12.809062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.241 [2024-11-29 14:25:12.809084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:31.241 [2024-11-29 14:25:12.809091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.241 [2024-11-29 14:25:12.809101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:31.241 [2024-11-29 14:25:12.809109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:31.241 [2024-11-29 14:25:12.809116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:31.241 [2024-11-29 14:25:12.809124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:31.241 [2024-11-29 14:25:12.809131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:31.241 [2024-11-29 14:25:12.809138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:31.241 [2024-11-29 14:25:12.809145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:31.241 [2024-11-29 14:25:12.809181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.241 [2024-11-29 14:25:12.809189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.241 [2024-11-29 14:25:12.809208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.241 [2024-11-29 14:25:12.809215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.241 [2024-11-29 14:25:12.809223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.241 [2024-11-29 14:25:12.809231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.809243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.241 [2024-11-29 14:25:12.809252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:17:31.241 [2024-11-29 14:25:12.809259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.833040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.833110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.241 [2024-11-29 14:25:12.833132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.725 ms 00:17:31.241 [2024-11-29 14:25:12.833147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.833386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.833408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.241 [2024-11-29 14:25:12.833424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:31.241 [2024-11-29 14:25:12.833445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.845585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.845623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.241 [2024-11-29 14:25:12.845633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.104 ms 00:17:31.241 [2024-11-29 14:25:12.845641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.845706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.845716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.241 [2024-11-29 14:25:12.845727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.241 [2024-11-29 14:25:12.845734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.846127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.846146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.241 [2024-11-29 14:25:12.846156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:31.241 [2024-11-29 14:25:12.846164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.846310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.846326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.241 [2024-11-29 14:25:12.846336] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:31.241 [2024-11-29 14:25:12.846347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.852543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.852581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.241 [2024-11-29 14:25:12.852595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:17:31.241 [2024-11-29 14:25:12.852602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.855690] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:31.241 [2024-11-29 14:25:12.855859] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.241 [2024-11-29 14:25:12.855874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.855883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.241 [2024-11-29 14:25:12.855892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:17:31.241 [2024-11-29 14:25:12.855900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.871227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.871265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.241 [2024-11-29 14:25:12.871282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.263 ms 00:17:31.241 [2024-11-29 14:25:12.871293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.873325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.873359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.241 [2024-11-29 14:25:12.873368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.959 ms 00:17:31.241 [2024-11-29 14:25:12.873375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.875219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.875254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.241 [2024-11-29 14:25:12.875270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:17:31.241 [2024-11-29 14:25:12.875277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.875647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.875662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.241 [2024-11-29 14:25:12.875671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:17:31.241 [2024-11-29 14:25:12.875682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.894380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.894583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.241 [2024-11-29 14:25:12.894602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.674 ms 00:17:31.241 [2024-11-29 14:25:12.894611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.902338] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.241 [2024-11-29 14:25:12.918709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.918750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.241 [2024-11-29 14:25:12.918762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.952 ms 00:17:31.241 [2024-11-29 14:25:12.918770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.918855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.241 [2024-11-29 14:25:12.918866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.241 [2024-11-29 14:25:12.918876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:31.241 [2024-11-29 14:25:12.918884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.241 [2024-11-29 14:25:12.918952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.242 [2024-11-29 14:25:12.918963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.242 [2024-11-29 14:25:12.918973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:31.242 [2024-11-29 14:25:12.918981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.242 [2024-11-29 14:25:12.919007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.242 [2024-11-29 14:25:12.919016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.242 [2024-11-29 14:25:12.919025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.242 [2024-11-29 14:25:12.919033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.242 [2024-11-29 14:25:12.919067] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.242 [2024-11-29 14:25:12.919080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.242 [2024-11-29 14:25:12.919088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:31.242 [2024-11-29 14:25:12.919096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:31.242 [2024-11-29 14:25:12.919104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.242 [2024-11-29 14:25:12.923973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.242 [2024-11-29 14:25:12.924012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.242 [2024-11-29 14:25:12.924023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:17:31.242 [2024-11-29 14:25:12.924031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.242 [2024-11-29 14:25:12.924114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.242 [2024-11-29 14:25:12.924127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.242 [2024-11-29 14:25:12.924135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:31.242 [2024-11-29 14:25:12.924148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.242 
[2024-11-29 14:25:12.925069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.242 [2024-11-29 14:25:12.926214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.492 ms, result 0 00:17:31.242 [2024-11-29 14:25:12.927231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.242 [2024-11-29 14:25:12.935411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.815  [2024-11-29T14:25:13.609Z] Copying: 4096/4096 [kB] (average 9941 kBps)[2024-11-29 14:25:13.348623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.815 [2024-11-29 14:25:13.349749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.815 [2024-11-29 14:25:13.349792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.815 [2024-11-29 14:25:13.349814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.815 [2024-11-29 14:25:13.349823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.349844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:31.816 [2024-11-29 14:25:13.350534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.350576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.816 [2024-11-29 14:25:13.350588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:17:31.816 [2024-11-29 14:25:13.350598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.353521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.353565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.816 [2024-11-29 14:25:13.353576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.896 ms 00:17:31.816 [2024-11-29 14:25:13.353585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.358097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.358275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.816 [2024-11-29 14:25:13.358292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.487 ms 00:17:31.816 [2024-11-29 14:25:13.358302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.365382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.365588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.816 [2024-11-29 14:25:13.365608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:17:31.816 [2024-11-29 14:25:13.365617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.368561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.368600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.816 [2024-11-29 14:25:13.368610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.872 ms 00:17:31.816 [2024-11-29 14:25:13.368631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.373794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.373842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:31.816 [2024-11-29 14:25:13.373861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.092 ms 00:17:31.816 [2024-11-29 14:25:13.373869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.374002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.374013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:31.816 [2024-11-29 14:25:13.374023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:31.816 [2024-11-29 14:25:13.374032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.377465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.377529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:31.816 [2024-11-29 14:25:13.377540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.414 ms 00:17:31.816 [2024-11-29 14:25:13.377548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.380432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.380619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:31.816 [2024-11-29 14:25:13.380649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:17:31.816 [2024-11-29 14:25:13.380657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.382768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.382814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:31.816 [2024-11-29 14:25:13.382823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:17:31.816 [2024-11-29 14:25:13.382831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.385297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.816 [2024-11-29 14:25:13.385343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:31.816 [2024-11-29 14:25:13.385353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:17:31.816 [2024-11-29 14:25:13.385360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.816 [2024-11-29 14:25:13.385402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:31.816 [2024-11-29 14:25:13.385426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 
[2024-11-29 14:25:13.385460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:31.816 [2024-11-29 14:25:13.385680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:31.816 [2024-11-29 14:25:13.385846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.385995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:31.817 [2024-11-29 14:25:13.386237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:31.817 [2024-11-29 14:25:13.386245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:31.817 [2024-11-29 14:25:13.386263] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:31.817 [2024-11-29 14:25:13.386270] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:31.817 [2024-11-29 14:25:13.386277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:31.817 [2024-11-29 14:25:13.386286] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:31.817 [2024-11-29 14:25:13.386293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:31.817 [2024-11-29 14:25:13.386301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:31.817 [2024-11-29 14:25:13.386308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:31.817 [2024-11-29 14:25:13.386315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:31.817 [2024-11-29 14:25:13.386324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:31.817 [2024-11-29 14:25:13.386331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.817 [2024-11-29 14:25:13.386339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:31.817 [2024-11-29 14:25:13.386352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:17:31.817 [2024-11-29 14:25:13.386360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.388293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.817 [2024-11-29 14:25:13.388452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:31.817 [2024-11-29 14:25:13.388468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.900 ms 00:17:31.817 [2024-11-29 14:25:13.388478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.388636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.817 [2024-11-29 14:25:13.388650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:31.817 [2024-11-29 14:25:13.388668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:31.817 [2024-11-29 14:25:13.388677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.396779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.396826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.817 [2024-11-29 14:25:13.396837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.396844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.396921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.396938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.817 [2024-11-29 14:25:13.396946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.396953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.396996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.397006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.817 [2024-11-29 14:25:13.397014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.397022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.397039] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.397048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.817 [2024-11-29 14:25:13.397059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.397066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.410690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.410885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.817 [2024-11-29 14:25:13.410904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.410925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.421326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.421382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.817 [2024-11-29 14:25:13.421394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.421402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.421448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.421457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.817 [2024-11-29 14:25:13.421466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.817 [2024-11-29 14:25:13.421475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.817 [2024-11-29 14:25:13.421537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.817 [2024-11-29 14:25:13.421547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.818 [2024-11-29 14:25:13.421556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.818 [2024-11-29 14:25:13.421568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.818 [2024-11-29 14:25:13.421640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.818 [2024-11-29 14:25:13.421673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.818 [2024-11-29 14:25:13.421682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.818 [2024-11-29 14:25:13.421690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.818 [2024-11-29 14:25:13.421721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.818 [2024-11-29 14:25:13.421731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.818 [2024-11-29 14:25:13.421739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.818 [2024-11-29 14:25:13.421752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.818 [2024-11-29 14:25:13.421796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.818 [2024-11-29 14:25:13.421805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.818 [2024-11-29 14:25:13.421815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.818 [2024-11-29 14:25:13.421823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:31.818 [2024-11-29 14:25:13.421872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.818 [2024-11-29 14:25:13.421883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.818 [2024-11-29 14:25:13.421892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.818 [2024-11-29 14:25:13.421902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.818 [2024-11-29 14:25:13.422057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.296 ms, result 0 00:17:32.078 00:17:32.078 00:17:32.078 14:25:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85933 00:17:32.078 14:25:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85933 00:17:32.078 14:25:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85933 ']' 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.078 14:25:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:32.078 [2024-11-29 14:25:13.741316] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:32.078 [2024-11-29 14:25:13.741692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85933 ] 00:17:32.338 [2024-11-29 14:25:13.895016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.338 [2024-11-29 14:25:13.948579] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.911 14:25:14 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.911 14:25:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:32.911 14:25:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:33.172 [2024-11-29 14:25:14.800809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.172 [2024-11-29 14:25:14.800885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.433 [2024-11-29 14:25:14.977639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.433 [2024-11-29 14:25:14.977887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:33.433 [2024-11-29 14:25:14.977912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.433 [2024-11-29 14:25:14.977923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.433 [2024-11-29 14:25:14.980472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.433 [2024-11-29 14:25:14.980546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.433 [2024-11-29 14:25:14.980557] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:17:33.433 [2024-11-29 14:25:14.980568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.433 [2024-11-29 14:25:14.980667] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:33.433 [2024-11-29 14:25:14.980946] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:33.433 [2024-11-29 14:25:14.980963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.433 [2024-11-29 14:25:14.980976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.433 [2024-11-29 14:25:14.980986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:33.433 [2024-11-29 14:25:14.980998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.433 [2024-11-29 14:25:14.982752] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:33.433 [2024-11-29 14:25:14.986554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.433 [2024-11-29 14:25:14.986603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:33.433 [2024-11-29 14:25:14.986617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.799 ms 00:17:33.433 [2024-11-29 14:25:14.986626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.433 [2024-11-29 14:25:14.986706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.433 [2024-11-29 14:25:14.986715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:33.433 [2024-11-29 14:25:14.986729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:33.434 [2024-11-29 14:25:14.986737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.994816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.994857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.434 [2024-11-29 14:25:14.994870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.021 ms 00:17:33.434 [2024-11-29 14:25:14.994879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.995014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.995025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.434 [2024-11-29 14:25:14.995037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:33.434 [2024-11-29 14:25:14.995045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.995076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.995087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:33.434 [2024-11-29 14:25:14.995097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:33.434 [2024-11-29 14:25:14.995108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.995133] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:33.434 [2024-11-29 14:25:14.997301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.997486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.434 [2024-11-29 14:25:14.997519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:17:33.434 [2024-11-29 14:25:14.997530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.997577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.997588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:33.434 [2024-11-29 14:25:14.997598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:33.434 [2024-11-29 14:25:14.997609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.997630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:33.434 [2024-11-29 14:25:14.997654] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:33.434 [2024-11-29 14:25:14.997700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:33.434 [2024-11-29 14:25:14.997721] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:33.434 [2024-11-29 14:25:14.997830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:33.434 [2024-11-29 14:25:14.997848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:33.434 [2024-11-29 14:25:14.997862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:33.434 [2024-11-29 14:25:14.997879] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:33.434 [2024-11-29 14:25:14.997889] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:33.434 [2024-11-29 14:25:14.997903] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:33.434 [2024-11-29 14:25:14.997911] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:33.434 [2024-11-29 14:25:14.997920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:33.434 [2024-11-29 14:25:14.997933] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:33.434 [2024-11-29 14:25:14.997944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.997953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:33.434 [2024-11-29 14:25:14.997963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:33.434 [2024-11-29 14:25:14.997972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:14.998062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.998073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:33.434 [2024-11-29 14:25:14.998089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:33.434 [2024-11-29 14:25:14.998097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 
[2024-11-29 14:25:14.998204] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:33.434 [2024-11-29 14:25:14.998216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:33.434 [2024-11-29 14:25:14.998232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:33.434 [2024-11-29 14:25:14.998265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:33.434 [2024-11-29 14:25:14.998303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.434 [2024-11-29 14:25:14.998322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:33.434 [2024-11-29 14:25:14.998332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:33.434 [2024-11-29 14:25:14.998344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.434 [2024-11-29 14:25:14.998352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:33.434 [2024-11-29 14:25:14.998362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:33.434 [2024-11-29 14:25:14.998370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:33.434 [2024-11-29 14:25:14.998389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:33.434 [2024-11-29 14:25:14.998419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:33.434 [2024-11-29 14:25:14.998444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:33.434 [2024-11-29 14:25:14.998473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:33.434 [2024-11-29 14:25:14.998520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:33.434 [2024-11-29 14:25:14.998545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.434 [2024-11-29 14:25:14.998561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:33.434 [2024-11-29 14:25:14.998567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:33.434 [2024-11-29 14:25:14.998580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.434 [2024-11-29 14:25:14.998588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:33.434 [2024-11-29 14:25:14.998597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:33.434 [2024-11-29 14:25:14.998604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:33.434 [2024-11-29 14:25:14.998620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:33.434 [2024-11-29 14:25:14.998629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998637] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:33.434 [2024-11-29 14:25:14.998648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:33.434 [2024-11-29 14:25:14.998656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.434 [2024-11-29 14:25:14.998675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:33.434 [2024-11-29 14:25:14.998683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:33.434 [2024-11-29 14:25:14.998690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:33.434 [2024-11-29 14:25:14.998699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:33.434 [2024-11-29 14:25:14.998706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:33.434 [2024-11-29 14:25:14.998718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:33.434 [2024-11-29 14:25:14.998727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:33.434 [2024-11-29 14:25:14.998739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:33.434 [2024-11-29 14:25:14.998763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:33.434 [2024-11-29 14:25:14.998770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:33.434 [2024-11-29 14:25:14.998779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:33.434 [2024-11-29 14:25:14.998786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:33.434 [2024-11-29 14:25:14.998797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:33.434 [2024-11-29 14:25:14.998804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:33.434 [2024-11-29 14:25:14.998813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:33.434 [2024-11-29 14:25:14.998820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:33.434 [2024-11-29 14:25:14.998829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:33.434 [2024-11-29 14:25:14.998877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:33.434 [2024-11-29 14:25:14.998887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:33.434 [2024-11-29 14:25:14.998907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:33.434 [2024-11-29 14:25:14.998929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:33.434 [2024-11-29 14:25:14.998939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:33.434 [2024-11-29 14:25:14.998946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:14.998957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:33.434 [2024-11-29 14:25:14.998966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:17:33.434 [2024-11-29 14:25:14.998977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.013398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.013624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.434 [2024-11-29 14:25:15.013646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.341 ms 00:17:33.434 [2024-11-29 14:25:15.013658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 
[2024-11-29 14:25:15.013789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.013806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:33.434 [2024-11-29 14:25:15.013820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:33.434 [2024-11-29 14:25:15.013829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.026425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.026483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.434 [2024-11-29 14:25:15.026530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.574 ms 00:17:33.434 [2024-11-29 14:25:15.026541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.026609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.026626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.434 [2024-11-29 14:25:15.026637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.434 [2024-11-29 14:25:15.026648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.027243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.027288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.434 [2024-11-29 14:25:15.027300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:17:33.434 [2024-11-29 14:25:15.027311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.027470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.027486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.434 [2024-11-29 14:25:15.027517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:33.434 [2024-11-29 14:25:15.027528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.434 [2024-11-29 14:25:15.052709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.434 [2024-11-29 14:25:15.052766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.434 [2024-11-29 14:25:15.052786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.153 ms 00:17:33.435 [2024-11-29 14:25:15.052797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.056764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:33.435 [2024-11-29 14:25:15.056821] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:33.435 [2024-11-29 14:25:15.056839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.056851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:33.435 [2024-11-29 14:25:15.056861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.903 ms 00:17:33.435 [2024-11-29 14:25:15.056872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.072785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.072842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:33.435 [2024-11-29 14:25:15.072855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.832 ms 00:17:33.435 [2024-11-29 14:25:15.072869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.076033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.076084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:33.435 [2024-11-29 14:25:15.076095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:17:33.435 [2024-11-29 14:25:15.076105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.078662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.078712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:33.435 [2024-11-29 14:25:15.078723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:17:33.435 [2024-11-29 14:25:15.078732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.079125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.079144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:33.435 [2024-11-29 14:25:15.079154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:33.435 [2024-11-29 14:25:15.079164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.103698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.103759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:33.435 [2024-11-29 14:25:15.103772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.511 ms 00:17:33.435 [2024-11-29 14:25:15.103785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.112032] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:33.435 [2024-11-29 14:25:15.130630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.130680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:33.435 [2024-11-29 14:25:15.130695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.750 ms 00:17:33.435 [2024-11-29 14:25:15.130704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.130802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.130813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:33.435 [2024-11-29 14:25:15.130827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:33.435 [2024-11-29 14:25:15.130839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.130898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.130926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:33.435 [2024-11-29 14:25:15.130940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.036 ms 00:17:33.435 [2024-11-29 14:25:15.130948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.130977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.130987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:33.435 [2024-11-29 14:25:15.131002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:33.435 [2024-11-29 14:25:15.131011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.131054] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:33.435 [2024-11-29 14:25:15.131064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.131073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:33.435 [2024-11-29 14:25:15.131082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:33.435 [2024-11-29 14:25:15.131092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.137237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.137302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:33.435 [2024-11-29 14:25:15.137313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:17:33.435 [2024-11-29 14:25:15.137324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.137427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.435 [2024-11-29 14:25:15.137439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:33.435 [2024-11-29 14:25:15.137448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:33.435 [2024-11-29 14:25:15.137457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.435 [2024-11-29 14:25:15.138564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.435 [2024-11-29 14:25:15.139928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.565 ms, result 0 00:17:33.435 [2024-11-29 14:25:15.142063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:33.435 Some configs were skipped because the RPC state that can call them passed over. 
00:17:33.435 14:25:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:33.695 [2024-11-29 14:25:15.375615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.695 [2024-11-29 14:25:15.375807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:33.695 [2024-11-29 14:25:15.375878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:17:33.695 [2024-11-29 14:25:15.375905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.695 [2024-11-29 14:25:15.375963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.420 ms, result 0 00:17:33.695 true 00:17:33.695 14:25:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:33.956 [2024-11-29 14:25:15.591733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.956 [2024-11-29 14:25:15.591914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:33.956 [2024-11-29 14:25:15.591935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:17:33.956 [2024-11-29 14:25:15.591945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.956 [2024-11-29 14:25:15.591988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.180 ms, result 0 00:17:33.956 true 00:17:33.956 14:25:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85933 00:17:33.956 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85933 ']' 00:17:33.956 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85933 00:17:33.956 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85933 00:17:33.957 killing process with pid 85933 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85933' 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85933 00:17:33.957 14:25:15 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85933 00:17:34.220 [2024-11-29 14:25:15.768881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.220 [2024-11-29 14:25:15.768941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:34.220 [2024-11-29 14:25:15.768957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:34.221 [2024-11-29 14:25:15.768969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.768997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:34.221 [2024-11-29 14:25:15.769742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.769800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:34.221 [2024-11-29 14:25:15.769824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.730 ms 00:17:34.221 [2024-11-29 14:25:15.769846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.770172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.770204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:34.221 [2024-11-29 14:25:15.770227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:34.221 [2024-11-29 14:25:15.770249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.774870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.775021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:34.221 [2024-11-29 14:25:15.775080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:17:34.221 [2024-11-29 14:25:15.775108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.782652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.782780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:34.221 [2024-11-29 14:25:15.782833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.493 ms 00:17:34.221 [2024-11-29 14:25:15.782860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.785534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.785665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:34.221 [2024-11-29 14:25:15.785718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:17:34.221 [2024-11-29 14:25:15.785731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.790803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.790860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:34.221 [2024-11-29 14:25:15.790871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:17:34.221 [2024-11-29 14:25:15.790881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.791043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.791058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:34.221 [2024-11-29 14:25:15.791071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:34.221 [2024-11-29 14:25:15.791081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.794391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.794442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:34.221 [2024-11-29 14:25:15.794452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:17:34.221 [2024-11-29 14:25:15.794463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.797080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.797126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:34.221 [2024-11-29 
14:25:15.797136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:17:34.221 [2024-11-29 14:25:15.797145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.799622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.799787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:34.221 [2024-11-29 14:25:15.799804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:17:34.221 [2024-11-29 14:25:15.799814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.802028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.221 [2024-11-29 14:25:15.802079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:34.221 [2024-11-29 14:25:15.802089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:17:34.221 [2024-11-29 14:25:15.802098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.221 [2024-11-29 14:25:15.802140] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:34.221 [2024-11-29 14:25:15.802158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802330] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 
14:25:15.802585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:34.221 [2024-11-29 14:25:15.802668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:34.222 [2024-11-29 14:25:15.802814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.802992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.222 [2024-11-29 14:25:15.803151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.222 [2024-11-29 14:25:15.803162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:34.222 [2024-11-29 14:25:15.803173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:34.222 [2024-11-29 14:25:15.803180] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.222 [2024-11-29 14:25:15.803189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.222 [2024-11-29 14:25:15.803201] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.222 [2024-11-29 14:25:15.803211] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.222 [2024-11-29 14:25:15.803218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:34.222 [2024-11-29 14:25:15.803236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.222 [2024-11-29 14:25:15.803243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.222 [2024-11-29 14:25:15.803251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.222 [2024-11-29 14:25:15.803258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.222 [2024-11-29 14:25:15.803270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:34.222 [2024-11-29 14:25:15.803281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:17:34.222 [2024-11-29 14:25:15.803309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.805713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.222 [2024-11-29 14:25:15.805848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:34.222 [2024-11-29 14:25:15.805907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.354 ms 00:17:34.222 [2024-11-29 14:25:15.805936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.806068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:34.222 [2024-11-29 14:25:15.806098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:34.222 [2024-11-29 14:25:15.806120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:34.222 [2024-11-29 14:25:15.806143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.813568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.813719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.222 [2024-11-29 14:25:15.813774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.813802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.813910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.813939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.222 [2024-11-29 14:25:15.813961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.813987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.814050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.814176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.222 [2024-11-29 14:25:15.814198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.814220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.814252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.814276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.222 [2024-11-29 14:25:15.814297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.814375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.827523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.827696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.222 [2024-11-29 14:25:15.827750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.827777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.837441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.837631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.222 [2024-11-29 14:25:15.837688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.837720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.222 [2024-11-29 14:25:15.837783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.222 [2024-11-29 14:25:15.837811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.222 [2024-11-29 14:25:15.837833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.222 [2024-11-29 14:25:15.837859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:34.222 [2024-11-29 14:25:15.837919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.223 [2024-11-29 14:25:15.838030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.223 [2024-11-29 14:25:15.838059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.223 [2024-11-29 14:25:15.838081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.223 [2024-11-29 14:25:15.838183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.223 [2024-11-29 14:25:15.838213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.223 [2024-11-29 14:25:15.838237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.223 [2024-11-29 14:25:15.838251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.223 [2024-11-29 14:25:15.838286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.223 [2024-11-29 14:25:15.838300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:34.223 [2024-11-29 14:25:15.838308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.223 [2024-11-29 14:25:15.838327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.223 [2024-11-29 14:25:15.838368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.223 [2024-11-29 14:25:15.838379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.223 [2024-11-29 14:25:15.838388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.223 [2024-11-29 14:25:15.838398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.223 [2024-11-29 14:25:15.838448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.223 [2024-11-29 14:25:15.838463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.223 [2024-11-29 14:25:15.838471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.223 [2024-11-29 14:25:15.838483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.223 [2024-11-29 14:25:15.839054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.142 ms, result 0 00:17:34.484 14:25:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:34.484 [2024-11-29 14:25:16.150568] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:34.484 [2024-11-29 14:25:16.150702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85972 ] 00:17:34.745 [2024-11-29 14:25:16.303348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.745 [2024-11-29 14:25:16.353298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.745 [2024-11-29 14:25:16.467200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.745 [2024-11-29 14:25:16.467542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.007 [2024-11-29 14:25:16.627654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.007 [2024-11-29 14:25:16.627712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.007 [2024-11-29 14:25:16.627732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.007 [2024-11-29 14:25:16.627745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.007 [2024-11-29 14:25:16.630254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.007 [2024-11-29 14:25:16.630305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.007 [2024-11-29 14:25:16.630319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:17:35.007 [2024-11-29 14:25:16.630327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.007 [2024-11-29 14:25:16.630421] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.008 [2024-11-29 14:25:16.630731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.008 [2024-11-29 14:25:16.630750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.630763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.008 [2024-11-29 14:25:16.630775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:17:35.008 [2024-11-29 14:25:16.630783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.632589] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:35.008 [2024-11-29 14:25:16.636339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.636394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:35.008 [2024-11-29 14:25:16.636405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.752 ms 00:17:35.008 [2024-11-29 14:25:16.636416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.636511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.636522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:35.008 [2024-11-29 14:25:16.636531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:35.008 [2024-11-29 14:25:16.636539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.644402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:35.008 [2024-11-29 14:25:16.644445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.008 [2024-11-29 14:25:16.644459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.798 ms 00:17:35.008 [2024-11-29 14:25:16.644468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.644630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.644644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.008 [2024-11-29 14:25:16.644657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:35.008 [2024-11-29 14:25:16.644665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.644692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.644706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.008 [2024-11-29 14:25:16.644714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:35.008 [2024-11-29 14:25:16.644722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.644742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:35.008 [2024-11-29 14:25:16.646790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.646826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.008 [2024-11-29 14:25:16.646837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:17:35.008 [2024-11-29 14:25:16.646844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.646887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.646901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.008 [2024-11-29 14:25:16.646927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:35.008 [2024-11-29 14:25:16.646940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.646963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:35.008 [2024-11-29 14:25:16.646984] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:35.008 [2024-11-29 14:25:16.647021] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:35.008 [2024-11-29 14:25:16.647037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:35.008 [2024-11-29 14:25:16.647150] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:35.008 [2024-11-29 14:25:16.647166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.008 [2024-11-29 14:25:16.647179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:35.008 [2024-11-29 14:25:16.647189] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647198] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647208] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:35.008 [2024-11-29 14:25:16.647220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.008 [2024-11-29 14:25:16.647228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:35.008 [2024-11-29 14:25:16.647241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:35.008 [2024-11-29 14:25:16.647251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.647261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.008 [2024-11-29 14:25:16.647272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:35.008 [2024-11-29 14:25:16.647281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.647368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.008 [2024-11-29 14:25:16.647382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.008 [2024-11-29 14:25:16.647391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:35.008 [2024-11-29 14:25:16.647400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.008 [2024-11-29 14:25:16.647525] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.008 [2024-11-29 14:25:16.647545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.008 [2024-11-29 14:25:16.647555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.008 [2024-11-29 14:25:16.647587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:35.008 [2024-11-29 14:25:16.647618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.008 [2024-11-29 14:25:16.647636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.008 [2024-11-29 14:25:16.647645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:35.008 [2024-11-29 14:25:16.647653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.008 [2024-11-29 14:25:16.647661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.008 [2024-11-29 14:25:16.647669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:35.008 [2024-11-29 14:25:16.647679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.008 [2024-11-29 14:25:16.647696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647704] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.008 [2024-11-29 14:25:16.647723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.008 [2024-11-29 14:25:16.647753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.008 [2024-11-29 14:25:16.647782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.008 [2024-11-29 14:25:16.647808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.008 [2024-11-29 14:25:16.647822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.008 [2024-11-29 14:25:16.647829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.008 [2024-11-29 14:25:16.647843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.008 [2024-11-29 14:25:16.647849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:35.008 [2024-11-29 14:25:16.647856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.008 [2024-11-29 14:25:16.647863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:35.008 [2024-11-29 14:25:16.647871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:35.008 [2024-11-29 14:25:16.647878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.008 [2024-11-29 14:25:16.647886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:35.008 [2024-11-29 14:25:16.647895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:35.009 [2024-11-29 14:25:16.647904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.009 [2024-11-29 14:25:16.647911] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.009 [2024-11-29 14:25:16.647918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.009 [2024-11-29 14:25:16.647926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.009 [2024-11-29 14:25:16.647935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.009 [2024-11-29 14:25:16.647942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:35.009 [2024-11-29 14:25:16.647949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.009 [2024-11-29 14:25:16.647955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.009 
[2024-11-29 14:25:16.647963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.009 [2024-11-29 14:25:16.647971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.009 [2024-11-29 14:25:16.647978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.009 [2024-11-29 14:25:16.647988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.009 [2024-11-29 14:25:16.647998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:35.009 [2024-11-29 14:25:16.648019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:35.009 [2024-11-29 14:25:16.648026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:35.009 [2024-11-29 14:25:16.648034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:35.009 [2024-11-29 14:25:16.648041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:35.009 [2024-11-29 14:25:16.648049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:35.009 [2024-11-29 14:25:16.648057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:35.009 [2024-11-29 14:25:16.648064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:35.009 [2024-11-29 14:25:16.648071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:35.009 [2024-11-29 14:25:16.648078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:35.009 [2024-11-29 14:25:16.648117] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.009 [2024-11-29 14:25:16.648125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.009 [2024-11-29 14:25:16.648144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:35.009 [2024-11-29 14:25:16.648152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.009 [2024-11-29 14:25:16.648159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:35.009 [2024-11-29 14:25:16.648168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.648179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.009 [2024-11-29 14:25:16.648190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:17:35.009 [2024-11-29 14:25:16.648197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.676682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.676930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.009 [2024-11-29 14:25:16.677175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.430 ms 00:17:35.009 [2024-11-29 14:25:16.677219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.677451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.677535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:35.009 [2024-11-29 14:25:16.677640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:35.009 [2024-11-29 14:25:16.677675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.689639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.689804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.009 [2024-11-29 14:25:16.689866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.916 ms 00:17:35.009 [2024-11-29 14:25:16.689892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.689993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.690026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.009 [2024-11-29 14:25:16.690054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.009 [2024-11-29 14:25:16.690076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.690710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.690847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.009 [2024-11-29 14:25:16.690909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:17:35.009 [2024-11-29 14:25:16.690973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.691176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.691212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.009 [2024-11-29 14:25:16.691239] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:35.009 [2024-11-29 14:25:16.691265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.698540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.698692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.009 [2024-11-29 14:25:16.698744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.233 ms 00:17:35.009 [2024-11-29 14:25:16.698767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.702609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:35.009 [2024-11-29 14:25:16.702786] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:35.009 [2024-11-29 14:25:16.702849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.702870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:35.009 [2024-11-29 14:25:16.702890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.964 ms 00:17:35.009 [2024-11-29 14:25:16.702935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.718984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.719166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:35.009 [2024-11-29 14:25:16.719224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.962 ms 00:17:35.009 [2024-11-29 14:25:16.719249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.722269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.722427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:35.009 [2024-11-29 14:25:16.722485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:17:35.009 [2024-11-29 14:25:16.722540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.725182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.725337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:35.009 [2024-11-29 14:25:16.725404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:17:35.009 [2024-11-29 14:25:16.725427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.725834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.725957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:35.009 [2024-11-29 14:25:16.726027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:35.009 [2024-11-29 14:25:16.726050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.751093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.751264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:35.009 [2024-11-29 14:25:16.751322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.991 ms 00:17:35.009 [2024-11-29 14:25:16.751346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.759599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:35.009 [2024-11-29 14:25:16.778749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.778927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.009 [2024-11-29 14:25:16.778986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.217 ms 00:17:35.009 [2024-11-29 14:25:16.779011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.779129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.779162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:35.009 [2024-11-29 14:25:16.779184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:35.009 [2024-11-29 14:25:16.779208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.009 [2024-11-29 14:25:16.779284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.009 [2024-11-29 14:25:16.779392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.010 [2024-11-29 14:25:16.779424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:35.010 [2024-11-29 14:25:16.779445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.010 [2024-11-29 14:25:16.779486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.010 [2024-11-29 14:25:16.779545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.010 [2024-11-29 14:25:16.779574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:35.010 [2024-11-29 14:25:16.779598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.010 [2024-11-29 14:25:16.779712] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:35.010 [2024-11-29 14:25:16.779748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.010 [2024-11-29 14:25:16.779768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:35.010 [2024-11-29 14:25:16.779789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:35.010 [2024-11-29 14:25:16.779810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.010 [2024-11-29 14:25:16.785606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.010 [2024-11-29 14:25:16.785765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.010 [2024-11-29 14:25:16.785822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.760 ms 00:17:35.010 [2024-11-29 14:25:16.785846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.010 [2024-11-29 14:25:16.785946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.010 [2024-11-29 14:25:16.785978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:35.010 [2024-11-29 14:25:16.785999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:35.010 [2024-11-29 14:25:16.786018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.010 
[2024-11-29 14:25:16.787153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.010 [2024-11-29 14:25:16.788759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.149 ms, result 0 00:17:35.010 [2024-11-29 14:25:16.790086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.271 [2024-11-29 14:25:16.797448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.213  [2024-11-29T14:25:18.948Z] Copying: 14/256 [MB] (14 MBps) [2024-11-29T14:25:19.891Z] Copying: 25/256 [MB] (10 MBps) [2024-11-29T14:25:21.277Z] Copying: 35/256 [MB] (10 MBps) [2024-11-29T14:25:22.222Z] Copying: 57/256 [MB] (21 MBps) [2024-11-29T14:25:23.167Z] Copying: 75/256 [MB] (18 MBps) [2024-11-29T14:25:24.112Z] Copying: 85/256 [MB] (10 MBps) [2024-11-29T14:25:25.056Z] Copying: 97760/262144 [kB] (10132 kBps) [2024-11-29T14:25:26.012Z] Copying: 116/256 [MB] (21 MBps) [2024-11-29T14:25:26.955Z] Copying: 132/256 [MB] (15 MBps) [2024-11-29T14:25:27.902Z] Copying: 142/256 [MB] (10 MBps) [2024-11-29T14:25:28.884Z] Copying: 159/256 [MB] (16 MBps) [2024-11-29T14:25:30.275Z] Copying: 175/256 [MB] (15 MBps) [2024-11-29T14:25:31.219Z] Copying: 192/256 [MB] (16 MBps) [2024-11-29T14:25:32.166Z] Copying: 211/256 [MB] (18 MBps) [2024-11-29T14:25:33.111Z] Copying: 225/256 [MB] (13 MBps) [2024-11-29T14:25:34.058Z] Copying: 243/256 [MB] (18 MBps) [2024-11-29T14:25:34.058Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-29 14:25:33.757845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.264 [2024-11-29 14:25:33.761120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.761187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.264 [2024-11-29 14:25:33.761222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:52.264 [2024-11-29 14:25:33.761239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.761281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.264 [2024-11-29 14:25:33.762115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.762184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.264 [2024-11-29 14:25:33.762205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:17:52.264 [2024-11-29 14:25:33.762223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.762785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.762818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.264 [2024-11-29 14:25:33.762835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:17:52.264 [2024-11-29 14:25:33.762851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.770967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.771019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.264 [2024-11-29 14:25:33.771038] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.078 ms 00:17:52.264 [2024-11-29 14:25:33.771055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.778203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.778236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.264 [2024-11-29 14:25:33.778257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.087 ms 00:17:52.264 [2024-11-29 14:25:33.778266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.781237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.781279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.264 [2024-11-29 14:25:33.781290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:17:52.264 [2024-11-29 14:25:33.781309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.786820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.786861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.264 [2024-11-29 14:25:33.786880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.448 ms 00:17:52.264 [2024-11-29 14:25:33.786889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.787034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.787045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.264 [2024-11-29 14:25:33.787054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:52.264 [2024-11-29 14:25:33.787062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.790395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.790438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.264 [2024-11-29 14:25:33.790449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:17:52.264 [2024-11-29 14:25:33.790456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.793337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.793379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.264 [2024-11-29 14:25:33.793388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:17:52.264 [2024-11-29 14:25:33.793396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.795552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.795589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.264 [2024-11-29 14:25:33.795599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:17:52.264 [2024-11-29 14:25:33.795606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.797981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.264 [2024-11-29 14:25:33.798019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:17:52.264 [2024-11-29 14:25:33.798028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:17:52.264 [2024-11-29 14:25:33.798035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.264 [2024-11-29 14:25:33.798076] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.264 [2024-11-29 14:25:33.798099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.264 [2024-11-29 14:25:33.798109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.264 [2024-11-29 14:25:33.798118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.264 [2024-11-29 14:25:33.798126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798281] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 
[2024-11-29 14:25:33.798472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.265 [2024-11-29 14:25:33.798694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.265 [2024-11-29 14:25:33.798737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.266 [2024-11-29 14:25:33.798947] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.266 [2024-11-29 14:25:33.798956] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: df3abf56-433e-4b26-bd7a-fcb295efe551 00:17:52.266 [2024-11-29 14:25:33.798978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.266 [2024-11-29 14:25:33.798986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.266 [2024-11-29 14:25:33.798993] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.266 [2024-11-29 14:25:33.799001] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.266 [2024-11-29 14:25:33.799012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.266 [2024-11-29 14:25:33.799021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.266 [2024-11-29 14:25:33.799029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.266 [2024-11-29 14:25:33.799037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.266 [2024-11-29 14:25:33.799044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.266 [2024-11-29 14:25:33.799059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.266 [2024-11-29 14:25:33.799069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.266 [2024-11-29 14:25:33.799082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:17:52.266 [2024-11-29 14:25:33.799090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.801324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.266 [2024-11-29 14:25:33.801360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.266 [2024-11-29 14:25:33.801370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:17:52.266 [2024-11-29 14:25:33.801378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.801538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.266 [2024-11-29 14:25:33.801556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.266 [2024-11-29 14:25:33.801567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:52.266 [2024-11-29 14:25:33.801574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.808918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.808956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.266 [2024-11-29 14:25:33.808966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.808975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 
14:25:33.809041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.809053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.266 [2024-11-29 14:25:33.809061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.809069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.809112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.809122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.266 [2024-11-29 14:25:33.809131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.809139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.809156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.809171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.266 [2024-11-29 14:25:33.809182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.809195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.822693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.822734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.266 [2024-11-29 14:25:33.822745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.822753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.832994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.833042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.266 [2024-11-29 14:25:33.833052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.833061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.833109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.833119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.266 [2024-11-29 14:25:33.833128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.833136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.833166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.266 [2024-11-29 14:25:33.833175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.266 [2024-11-29 14:25:33.833184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.266 [2024-11-29 14:25:33.833195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.266 [2024-11-29 14:25:33.833269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.267 [2024-11-29 14:25:33.833280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.267 [2024-11-29 14:25:33.833289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.267 [2024-11-29 14:25:33.833298] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.267 [2024-11-29 14:25:33.833335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.267 [2024-11-29 14:25:33.833345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.267 [2024-11-29 14:25:33.833354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.267 [2024-11-29 14:25:33.833362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.267 [2024-11-29 14:25:33.833411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.267 [2024-11-29 14:25:33.833421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.267 [2024-11-29 14:25:33.833430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.267 [2024-11-29 14:25:33.833438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.267 [2024-11-29 14:25:33.833487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.267 [2024-11-29 14:25:33.833526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.267 [2024-11-29 14:25:33.833536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.267 [2024-11-29 14:25:33.833551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.267 [2024-11-29 14:25:33.833696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.564 ms, result 0 00:17:52.527 00:17:52.527 00:17:52.527 14:25:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:53.099 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:53.099 14:25:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:53.099 14:25:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:53.099 14:25:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:53.100 14:25:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.100 14:25:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:53.100 14:25:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:53.100 14:25:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85933 00:17:53.100 14:25:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85933 ']' 00:17:53.100 14:25:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85933 00:17:53.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85933) - No such process 00:17:53.100 Process with pid 85933 is not found 00:17:53.100 14:25:34 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85933 is not found' 00:17:53.100 00:17:53.100 real 1m5.597s 00:17:53.100 user 1m26.727s 00:17:53.100 sys 0m5.451s 00:17:53.100 14:25:34 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.100 ************************************ 00:17:53.100 END TEST ftl_trim 00:17:53.100 14:25:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:53.100 ************************************ 00:17:53.100 14:25:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:53.100 14:25:34 ftl -- common/autotest_common.sh@1101 -- # '[' 5 
-le 1 ']' 00:17:53.100 14:25:34 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.100 14:25:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:53.100 ************************************ 00:17:53.100 START TEST ftl_restore 00:17:53.100 ************************************ 00:17:53.100 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:53.360 * Looking for test storage... 00:17:53.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:53.360 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:53.360 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:53.360 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.361 14:25:34 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:53.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.361 --rc genhtml_branch_coverage=1 00:17:53.361 --rc genhtml_function_coverage=1 00:17:53.361 --rc genhtml_legend=1 00:17:53.361 --rc geninfo_all_blocks=1 00:17:53.361 --rc geninfo_unexecuted_blocks=1 00:17:53.361 00:17:53.361 ' 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:53.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.361 --rc genhtml_branch_coverage=1 00:17:53.361 --rc genhtml_function_coverage=1 00:17:53.361 --rc genhtml_legend=1 00:17:53.361 --rc geninfo_all_blocks=1 00:17:53.361 --rc geninfo_unexecuted_blocks=1 00:17:53.361 00:17:53.361 ' 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:53.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.361 --rc genhtml_branch_coverage=1 00:17:53.361 --rc genhtml_function_coverage=1 00:17:53.361 --rc genhtml_legend=1 00:17:53.361 --rc geninfo_all_blocks=1 00:17:53.361 --rc geninfo_unexecuted_blocks=1 00:17:53.361 00:17:53.361 ' 00:17:53.361 14:25:34 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:53.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.361 --rc genhtml_branch_coverage=1 00:17:53.361 --rc genhtml_function_coverage=1 00:17:53.361 --rc genhtml_legend=1 00:17:53.361 --rc geninfo_all_blocks=1 00:17:53.361 --rc geninfo_unexecuted_blocks=1 00:17:53.361 00:17:53.361 ' 00:17:53.361 14:25:34 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:53.361 14:25:34 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:53.361 14:25:34 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.tNSMlsjtob 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:53.361 
14:25:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86241 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86241 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86241 ']' 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.361 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:53.361 14:25:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:53.361 [2024-11-29 14:25:35.105385] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:53.361 [2024-11-29 14:25:35.105565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86241 ] 00:17:53.621 [2024-11-29 14:25:35.255690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.621 [2024-11-29 14:25:35.306117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.191 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.191 14:25:35 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:54.191 14:25:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:54.762 14:25:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:54.762 14:25:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:54.762 14:25:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:54.762 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.762 { 00:17:54.762 "name": "nvme0n1", 00:17:54.762 "aliases": [ 00:17:54.762 "ab7bec1a-7a54-4e58-b715-27b9602c94c5" 00:17:54.762 ], 00:17:54.762 "product_name": "NVMe disk", 00:17:54.762 "block_size": 4096, 00:17:54.762 "num_blocks": 1310720, 00:17:54.762 "uuid": 
"ab7bec1a-7a54-4e58-b715-27b9602c94c5", 00:17:54.762 "numa_id": -1, 00:17:54.762 "assigned_rate_limits": { 00:17:54.762 "rw_ios_per_sec": 0, 00:17:54.762 "rw_mbytes_per_sec": 0, 00:17:54.762 "r_mbytes_per_sec": 0, 00:17:54.762 "w_mbytes_per_sec": 0 00:17:54.762 }, 00:17:54.762 "claimed": true, 00:17:54.762 "claim_type": "read_many_write_one", 00:17:54.762 "zoned": false, 00:17:54.762 "supported_io_types": { 00:17:54.762 "read": true, 00:17:54.762 "write": true, 00:17:54.762 "unmap": true, 00:17:54.762 "flush": true, 00:17:54.762 "reset": true, 00:17:54.762 "nvme_admin": true, 00:17:54.762 "nvme_io": true, 00:17:54.762 "nvme_io_md": false, 00:17:54.762 "write_zeroes": true, 00:17:54.762 "zcopy": false, 00:17:54.762 "get_zone_info": false, 00:17:54.762 "zone_management": false, 00:17:54.762 "zone_append": false, 00:17:54.762 "compare": true, 00:17:54.762 "compare_and_write": false, 00:17:54.762 "abort": true, 00:17:54.762 "seek_hole": false, 00:17:54.762 "seek_data": false, 00:17:54.762 "copy": true, 00:17:54.762 "nvme_iov_md": false 00:17:54.762 }, 00:17:54.762 "driver_specific": { 00:17:54.763 "nvme": [ 00:17:54.763 { 00:17:54.763 "pci_address": "0000:00:11.0", 00:17:54.763 "trid": { 00:17:54.763 "trtype": "PCIe", 00:17:54.763 "traddr": "0000:00:11.0" 00:17:54.763 }, 00:17:54.763 "ctrlr_data": { 00:17:54.763 "cntlid": 0, 00:17:54.763 "vendor_id": "0x1b36", 00:17:54.763 "model_number": "QEMU NVMe Ctrl", 00:17:54.763 "serial_number": "12341", 00:17:54.763 "firmware_revision": "8.0.0", 00:17:54.763 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:54.763 "oacs": { 00:17:54.763 "security": 0, 00:17:54.763 "format": 1, 00:17:54.763 "firmware": 0, 00:17:54.763 "ns_manage": 1 00:17:54.763 }, 00:17:54.763 "multi_ctrlr": false, 00:17:54.763 "ana_reporting": false 00:17:54.763 }, 00:17:54.763 "vs": { 00:17:54.763 "nvme_version": "1.4" 00:17:54.763 }, 00:17:54.763 "ns_data": { 00:17:54.763 "id": 1, 00:17:54.763 "can_share": false 00:17:54.763 } 00:17:54.763 } 00:17:54.763 ], 00:17:54.763 "mp_policy": "active_passive" 00:17:54.763 } 00:17:54.763 } 00:17:54.763 ]' 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:54.763 14:25:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:54.763 14:25:36 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:54.763 14:25:36 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:54.763 14:25:36 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:54.763 14:25:36 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.763 14:25:36 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:55.024 14:25:36 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=1edfd12d-7287-4c5e-b0f3-d91b7981eda7 00:17:55.024 14:25:36 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:55.024 14:25:36 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1edfd12d-7287-4c5e-b0f3-d91b7981eda7 00:17:55.285 14:25:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:55.547 14:25:37 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3c32d854-be65-4395-be63-38e4078d06d0 00:17:55.547 14:25:37 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3c32d854-be65-4395-be63-38e4078d06d0 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:55.808 14:25:37 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:55.809 14:25:37 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:55.809 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:55.809 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.809 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:55.809 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:55.809 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:56.071 { 00:17:56.071 "name": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:56.071 "aliases": [ 00:17:56.071 "lvs/nvme0n1p0" 00:17:56.071 ], 00:17:56.071 "product_name": "Logical Volume", 00:17:56.071 "block_size": 4096, 00:17:56.071 "num_blocks": 26476544, 00:17:56.071 "uuid": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:56.071 "assigned_rate_limits": { 00:17:56.071 "rw_ios_per_sec": 0, 00:17:56.071 "rw_mbytes_per_sec": 0, 00:17:56.071 "r_mbytes_per_sec": 0, 00:17:56.071 "w_mbytes_per_sec": 0 00:17:56.071 }, 00:17:56.071 "claimed": false, 00:17:56.071 "zoned": false, 00:17:56.071 "supported_io_types": { 00:17:56.071 "read": true, 00:17:56.071 "write": true, 00:17:56.071 "unmap": true, 00:17:56.071 "flush": false, 00:17:56.071 "reset": true, 00:17:56.071 "nvme_admin": false, 00:17:56.071 "nvme_io": false, 00:17:56.071 "nvme_io_md": false, 00:17:56.071 "write_zeroes": true, 00:17:56.071 "zcopy": false, 00:17:56.071 "get_zone_info": false, 00:17:56.071 "zone_management": false, 00:17:56.071 "zone_append": false, 00:17:56.071 "compare": false, 00:17:56.071 "compare_and_write": false, 00:17:56.071 "abort": false, 00:17:56.071 "seek_hole": true, 00:17:56.071 "seek_data": true, 00:17:56.071 "copy": false, 00:17:56.071 "nvme_iov_md": false 00:17:56.071 }, 00:17:56.071 "driver_specific": { 00:17:56.071 "lvol": { 00:17:56.071 "lvol_store_uuid": "3c32d854-be65-4395-be63-38e4078d06d0", 00:17:56.071 "base_bdev": "nvme0n1", 00:17:56.071 "thin_provision": true, 00:17:56.071 "num_allocated_clusters": 0, 00:17:56.071 "snapshot": false, 00:17:56.071 "clone": false, 00:17:56.071 "esnap_clone": false 00:17:56.071 } 00:17:56.071 } 00:17:56.071 } 00:17:56.071 ]' 00:17:56.071 14:25:37 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:56.071 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:56.071 14:25:37 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:56.071 14:25:37 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:56.071 14:25:37 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:56.332 14:25:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:56.332 14:25:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:56.332 14:25:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.332 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.332 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:56.332 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:56.332 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:56.332 14:25:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:56.594 { 00:17:56.594 "name": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:56.594 "aliases": [ 00:17:56.594 "lvs/nvme0n1p0" 00:17:56.594 ], 00:17:56.594 "product_name": "Logical Volume", 00:17:56.594 "block_size": 4096, 00:17:56.594 "num_blocks": 26476544, 00:17:56.594 "uuid": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:56.594 "assigned_rate_limits": { 00:17:56.594 "rw_ios_per_sec": 0, 00:17:56.594 "rw_mbytes_per_sec": 0, 00:17:56.594 "r_mbytes_per_sec": 0, 00:17:56.594 "w_mbytes_per_sec": 0 00:17:56.594 }, 00:17:56.594 "claimed": false, 00:17:56.594 "zoned": false, 00:17:56.594 "supported_io_types": { 00:17:56.594 "read": true, 00:17:56.594 "write": true, 00:17:56.594 "unmap": true, 00:17:56.594 "flush": false, 00:17:56.594 "reset": true, 00:17:56.594 "nvme_admin": false, 00:17:56.594 "nvme_io": false, 00:17:56.594 "nvme_io_md": false, 00:17:56.594 "write_zeroes": true, 00:17:56.594 "zcopy": false, 00:17:56.594 "get_zone_info": false, 00:17:56.594 "zone_management": false, 00:17:56.594 "zone_append": false, 00:17:56.594 "compare": false, 00:17:56.594 "compare_and_write": false, 00:17:56.594 "abort": false, 00:17:56.594 "seek_hole": true, 00:17:56.594 "seek_data": true, 00:17:56.594 "copy": false, 00:17:56.594 "nvme_iov_md": false 00:17:56.594 }, 00:17:56.594 "driver_specific": { 00:17:56.594 "lvol": { 00:17:56.594 "lvol_store_uuid": "3c32d854-be65-4395-be63-38e4078d06d0", 00:17:56.594 "base_bdev": "nvme0n1", 00:17:56.594 "thin_provision": true, 00:17:56.594 "num_allocated_clusters": 0, 00:17:56.594 "snapshot": false, 00:17:56.594 "clone": false, 00:17:56.594 "esnap_clone": false 00:17:56.594 } 00:17:56.594 } 00:17:56.594 } 00:17:56.594 ]' 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:56.594 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:56.594 14:25:38 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:56.594 14:25:38 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:56.854 14:25:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:56.854 14:25:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.854 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:56.854 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:56.854 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:56.854 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:56.854 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d7ae28f-767e-4e15-a199-3a63b67a2894 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:57.116 { 00:17:57.116 "name": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:57.116 "aliases": [ 00:17:57.116 "lvs/nvme0n1p0" 00:17:57.116 ], 00:17:57.116 "product_name": "Logical Volume", 00:17:57.116 "block_size": 4096, 00:17:57.116 "num_blocks": 26476544, 00:17:57.116 "uuid": "0d7ae28f-767e-4e15-a199-3a63b67a2894", 00:17:57.116 "assigned_rate_limits": { 00:17:57.116 "rw_ios_per_sec": 0, 00:17:57.116 "rw_mbytes_per_sec": 0, 00:17:57.116 "r_mbytes_per_sec": 0, 00:17:57.116 "w_mbytes_per_sec": 0 00:17:57.116 }, 00:17:57.116 "claimed": false, 00:17:57.116 "zoned": false, 00:17:57.116 "supported_io_types": { 00:17:57.116 "read": true, 00:17:57.116 "write": true, 00:17:57.116 "unmap": true, 00:17:57.116 "flush": false, 00:17:57.116 "reset": true, 00:17:57.116 "nvme_admin": false, 00:17:57.116 "nvme_io": false, 00:17:57.116 "nvme_io_md": false, 00:17:57.116 "write_zeroes": true, 00:17:57.116 "zcopy": false, 00:17:57.116 "get_zone_info": false, 00:17:57.116 "zone_management": false, 00:17:57.116 "zone_append": false, 00:17:57.116 "compare": false, 00:17:57.116 "compare_and_write": false, 00:17:57.116 "abort": false, 00:17:57.116 "seek_hole": true, 00:17:57.116 "seek_data": true, 00:17:57.116 "copy": false, 00:17:57.116 "nvme_iov_md": false 00:17:57.116 }, 00:17:57.116 "driver_specific": { 00:17:57.116 "lvol": { 00:17:57.116 "lvol_store_uuid": "3c32d854-be65-4395-be63-38e4078d06d0", 00:17:57.116 "base_bdev": "nvme0n1", 00:17:57.116 "thin_provision": true, 00:17:57.116 "num_allocated_clusters": 0, 00:17:57.116 "snapshot": false, 00:17:57.116 "clone": false, 00:17:57.116 "esnap_clone": false 00:17:57.116 } 00:17:57.116 } 00:17:57.116 } 00:17:57.116 ]' 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:57.116 14:25:38 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:57.116 14:25:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0d7ae28f-767e-4e15-a199-3a63b67a2894 --l2p_dram_limit 10' 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:57.116 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:57.116 14:25:38 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0d7ae28f-767e-4e15-a199-3a63b67a2894 --l2p_dram_limit 10 -c nvc0n1p0 00:17:57.116 [2024-11-29 14:25:38.896438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.896477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:57.116 [2024-11-29 14:25:38.896488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.116 [2024-11-29 14:25:38.896511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.896562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.896571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.116 [2024-11-29 14:25:38.896580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:57.116 [2024-11-29 14:25:38.896592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.896610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:57.116 [2024-11-29 14:25:38.896800] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:57.116 [2024-11-29 14:25:38.896816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.896824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.116 [2024-11-29 14:25:38.896831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:17:57.116 [2024-11-29 14:25:38.896841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.896863] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:17:57.116 [2024-11-29 14:25:38.897808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.897826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:57.116 [2024-11-29 14:25:38.897834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:57.116 [2024-11-29 14:25:38.897841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.902582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 
14:25:38.902607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.116 [2024-11-29 14:25:38.902619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:17:57.116 [2024-11-29 14:25:38.902626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.902684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.902691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.116 [2024-11-29 14:25:38.902700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:57.116 [2024-11-29 14:25:38.902708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.902743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.902750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:57.116 [2024-11-29 14:25:38.902758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:57.116 [2024-11-29 14:25:38.902767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.902784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:57.116 [2024-11-29 14:25:38.904083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.904105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.116 [2024-11-29 14:25:38.904114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:17:57.116 [2024-11-29 14:25:38.904122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.904148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.904155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:57.116 [2024-11-29 14:25:38.904161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:57.116 [2024-11-29 14:25:38.904170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.904201] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:57.116 [2024-11-29 14:25:38.904313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:57.116 [2024-11-29 14:25:38.904321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:57.116 [2024-11-29 14:25:38.904330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:57.116 [2024-11-29 14:25:38.904338] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904346] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904352] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:57.116 [2024-11-29 14:25:38.904363] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:57.116 [2024-11-29 14:25:38.904368] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:57.116 [2024-11-29 14:25:38.904375] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:57.116 [2024-11-29 14:25:38.904384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.904393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:57.116 [2024-11-29 14:25:38.904399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:57.116 [2024-11-29 14:25:38.904406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.904470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.116 [2024-11-29 14:25:38.904478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:57.116 [2024-11-29 14:25:38.904483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:57.116 [2024-11-29 14:25:38.904503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.116 [2024-11-29 14:25:38.904576] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:57.116 [2024-11-29 14:25:38.904587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:57.116 [2024-11-29 14:25:38.904593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:57.116 [2024-11-29 14:25:38.904616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:57.116 [2024-11-29 14:25:38.904635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.116 [2024-11-29 14:25:38.904646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:57.116 [2024-11-29 14:25:38.904654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:57.116 [2024-11-29 14:25:38.904659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.116 [2024-11-29 14:25:38.904668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:57.116 [2024-11-29 14:25:38.904673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:57.116 [2024-11-29 14:25:38.904679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:57.116 [2024-11-29 14:25:38.904691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:57.116 [2024-11-29 14:25:38.904707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.116 [2024-11-29 14:25:38.904723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:57.116 
[2024-11-29 14:25:38.904729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:57.116 [2024-11-29 14:25:38.904734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.117 [2024-11-29 14:25:38.904742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:57.117 [2024-11-29 14:25:38.904747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.117 [2024-11-29 14:25:38.904761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:57.117 [2024-11-29 14:25:38.904770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.117 [2024-11-29 14:25:38.904783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:57.117 [2024-11-29 14:25:38.904789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.117 [2024-11-29 14:25:38.904801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:57.117 [2024-11-29 14:25:38.904808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:57.117 [2024-11-29 14:25:38.904814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.117 [2024-11-29 14:25:38.904821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:57.117 [2024-11-29 14:25:38.904827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:57.117 [2024-11-29 14:25:38.904834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:57.117 [2024-11-29 14:25:38.904847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:57.117 [2024-11-29 14:25:38.904853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904860] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:57.117 [2024-11-29 14:25:38.904869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:57.117 [2024-11-29 14:25:38.904878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.117 [2024-11-29 14:25:38.904884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.117 [2024-11-29 14:25:38.904892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:57.117 [2024-11-29 14:25:38.904898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:57.117 [2024-11-29 14:25:38.904905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:57.117 [2024-11-29 14:25:38.904911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:57.117 [2024-11-29 14:25:38.904918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:57.117 [2024-11-29 14:25:38.904924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:57.117 [2024-11-29 14:25:38.904934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:57.117 [2024-11-29 
14:25:38.904941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.904949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:57.117 [2024-11-29 14:25:38.904956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:57.117 [2024-11-29 14:25:38.904964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:57.117 [2024-11-29 14:25:38.904970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:57.117 [2024-11-29 14:25:38.904977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:57.117 [2024-11-29 14:25:38.904984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:57.117 [2024-11-29 14:25:38.904993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:57.117 [2024-11-29 14:25:38.904999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:57.117 [2024-11-29 14:25:38.905006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:57.117 [2024-11-29 14:25:38.905013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:57.117 [2024-11-29 14:25:38.905047] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:57.117 [2024-11-29 14:25:38.905056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:57.117 [2024-11-29 14:25:38.905071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:57.117 [2024-11-29 14:25:38.905079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:57.117 [2024-11-29 14:25:38.905085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:57.117 [2024-11-29 14:25:38.905094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.117 [2024-11-29 14:25:38.905100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:57.117 [2024-11-29 14:25:38.905109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:17:57.117 [2024-11-29 14:25:38.905115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.117 [2024-11-29 14:25:38.905145] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:57.117 [2024-11-29 14:25:38.905153] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:02.413 [2024-11-29 14:25:43.372927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.373047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:02.413 [2024-11-29 14:25:43.373076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4467.753 ms 00:18:02.413 [2024-11-29 14:25:43.373086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.393282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.393351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.413 [2024-11-29 14:25:43.393371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.046 ms 00:18:02.413 [2024-11-29 14:25:43.393381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.393574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.393587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:02.413 [2024-11-29 14:25:43.393606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:02.413 [2024-11-29 14:25:43.393615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.410635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.410689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.413 [2024-11-29 14:25:43.410711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.397 ms 00:18:02.413 [2024-11-29 14:25:43.410720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.410763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.410776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.413 [2024-11-29 14:25:43.410788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:02.413 [2024-11-29 14:25:43.410800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.411620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.411665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.413 [2024-11-29 14:25:43.411681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:18:02.413 [2024-11-29 14:25:43.411691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 
[2024-11-29 14:25:43.411834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.411855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.413 [2024-11-29 14:25:43.411873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:02.413 [2024-11-29 14:25:43.411883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.440827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.440889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.413 [2024-11-29 14:25:43.440914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.911 ms 00:18:02.413 [2024-11-29 14:25:43.440924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.452542] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:02.413 [2024-11-29 14:25:43.457667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.457721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.413 [2024-11-29 14:25:43.457733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.622 ms 00:18:02.413 [2024-11-29 14:25:43.457745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.413 [2024-11-29 14:25:43.558374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.413 [2024-11-29 14:25:43.558444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:02.413 [2024-11-29 14:25:43.558459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.594 ms 00:18:02.413 [2024-11-29 14:25:43.558476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.558740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.558759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.414 [2024-11-29 14:25:43.558769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:02.414 [2024-11-29 14:25:43.558783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.565397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.565461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:02.414 [2024-11-29 14:25:43.565475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.560 ms 00:18:02.414 [2024-11-29 14:25:43.565487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.570974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.571035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:02.414 [2024-11-29 14:25:43.571047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.406 ms 00:18:02.414 [2024-11-29 14:25:43.571059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.571427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.571445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.414 
[2024-11-29 14:25:43.571455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:18:02.414 [2024-11-29 14:25:43.571469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.622454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.622548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:02.414 [2024-11-29 14:25:43.622563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.929 ms 00:18:02.414 [2024-11-29 14:25:43.622575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.631112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.631175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:02.414 [2024-11-29 14:25:43.631189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.429 ms 00:18:02.414 [2024-11-29 14:25:43.631202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.637185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.637243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:02.414 [2024-11-29 14:25:43.637254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.930 ms 00:18:02.414 [2024-11-29 14:25:43.637265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.644126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.644186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.414 [2024-11-29 14:25:43.644199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:18:02.414 [2024-11-29 14:25:43.644214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.644271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.644284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.414 [2024-11-29 14:25:43.644294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:02.414 [2024-11-29 14:25:43.644306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.644409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:43.644431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.414 [2024-11-29 14:25:43.644441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:02.414 [2024-11-29 14:25:43.644464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:43.646250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4749.212 ms, result 0 00:18:02.414 { 00:18:02.414 "name": "ftl0", 00:18:02.414 "uuid": "9176fde2-a4ae-4ddb-9d8b-b6480a765c80" 00:18:02.414 } 00:18:02.414 14:25:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:02.414 14:25:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:02.414 14:25:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:02.414 14:25:43 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:02.414 [2024-11-29 14:25:44.085022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.085080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.414 [2024-11-29 14:25:44.085099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:02.414 [2024-11-29 14:25:44.085110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.085143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.414 [2024-11-29 14:25:44.086172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.086221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.414 [2024-11-29 14:25:44.086234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:18:02.414 [2024-11-29 14:25:44.086252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.086543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.086559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.414 [2024-11-29 14:25:44.086570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:02.414 [2024-11-29 14:25:44.086582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.089856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.089893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:02.414 [2024-11-29 14:25:44.089902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:18:02.414 [2024-11-29 14:25:44.089913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.096271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.096321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.414 [2024-11-29 14:25:44.096334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:18:02.414 [2024-11-29 14:25:44.096345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.099627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.099691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.414 [2024-11-29 14:25:44.099702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:18:02.414 [2024-11-29 14:25:44.099713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.107140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.107394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.414 [2024-11-29 14:25:44.107417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.374 ms 00:18:02.414 [2024-11-29 14:25:44.107435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.107618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.107634] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.414 [2024-11-29 14:25:44.107645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:02.414 [2024-11-29 14:25:44.107657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.111319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.111376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:02.414 [2024-11-29 14:25:44.111386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:18:02.414 [2024-11-29 14:25:44.111397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.114482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.114557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:02.414 [2024-11-29 14:25:44.114568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:18:02.414 [2024-11-29 14:25:44.114578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.117117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.117169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.414 [2024-11-29 14:25:44.117180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.486 ms 00:18:02.414 [2024-11-29 14:25:44.117191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.119630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.414 [2024-11-29 14:25:44.119689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.414 [2024-11-29 14:25:44.119700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.345 ms 00:18:02.414 [2024-11-29 14:25:44.119715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.414 [2024-11-29 14:25:44.119762] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.414 [2024-11-29 14:25:44.119783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119878] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.414 [2024-11-29 14:25:44.119897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.119994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 
[2024-11-29 14:25:44.120118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:02.415 [2024-11-29 14:25:44.120353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.415 [2024-11-29 14:25:44.120779] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.415 [2024-11-29 14:25:44.120794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:18:02.416 [2024-11-29 14:25:44.120805] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.416 [2024-11-29 14:25:44.120813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.416 [2024-11-29 14:25:44.120823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.416 [2024-11-29 14:25:44.120831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.416 [2024-11-29 14:25:44.120841] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.416 [2024-11-29 14:25:44.120850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.416 [2024-11-29 14:25:44.120861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.416 [2024-11-29 14:25:44.120868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.416 [2024-11-29 14:25:44.120876] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:02.416 [2024-11-29 14:25:44.120884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.416 [2024-11-29 14:25:44.120897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.416 [2024-11-29 14:25:44.120910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:18:02.416 [2024-11-29 14:25:44.120920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.124195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.416 [2024-11-29 14:25:44.124242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.416 [2024-11-29 14:25:44.124252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:18:02.416 [2024-11-29 14:25:44.124265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.124411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.416 [2024-11-29 14:25:44.124432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.416 [2024-11-29 14:25:44.124443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:02.416 [2024-11-29 14:25:44.124454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.135853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.135912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.416 [2024-11-29 14:25:44.135926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.135938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.136024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.136037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.416 [2024-11-29 14:25:44.136046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.136060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.136152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.136173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.416 [2024-11-29 14:25:44.136183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.136195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.136214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.136231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.416 [2024-11-29 14:25:44.136239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.136253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.156473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.156577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.416 [2024-11-29 14:25:44.156591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 
[2024-11-29 14:25:44.156604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.173554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.173894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.416 [2024-11-29 14:25:44.173916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.173933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.416 [2024-11-29 14:25:44.174073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.416 [2024-11-29 14:25:44.174166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.416 [2024-11-29 14:25:44.174321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.416 [2024-11-29 14:25:44.174408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.416 [2024-11-29 14:25:44.174543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.416 [2024-11-29 14:25:44.174642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.416 [2024-11-29 14:25:44.174655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.416 [2024-11-29 14:25:44.174668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.416 [2024-11-29 14:25:44.174857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 89.776 ms, result 0 00:18:02.416 true 00:18:02.416 14:25:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86241 00:18:02.416 
14:25:44 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86241 ']' 00:18:02.416 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86241 00:18:02.416 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86241 00:18:02.678 killing process with pid 86241 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86241' 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86241 00:18:02.678 14:25:44 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86241 00:18:07.969 14:25:49 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:12.171 262144+0 records in 00:18:12.171 262144+0 records out 00:18:12.171 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.65318 s, 294 MB/s 00:18:12.171 14:25:53 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:13.549 14:25:55 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:13.809 [2024-11-29 14:25:55.385746] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:13.809 [2024-11-29 14:25:55.386099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86471 ] 00:18:13.809 [2024-11-29 14:25:55.539954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.071 [2024-11-29 14:25:55.614256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.071 [2024-11-29 14:25:55.764950] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.071 [2024-11-29 14:25:55.765040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:14.335 [2024-11-29 14:25:55.929475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.335 [2024-11-29 14:25:55.929551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.335 [2024-11-29 14:25:55.929572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:14.335 [2024-11-29 14:25:55.929582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.335 [2024-11-29 14:25:55.929646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.335 [2024-11-29 14:25:55.929659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.335 [2024-11-29 14:25:55.929670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:14.335 [2024-11-29 14:25:55.929679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.335 [2024-11-29 14:25:55.929707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:14.335 [2024-11-29 14:25:55.930150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.335 [2024-11-29 14:25:55.930195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.335 [2024-11-29 14:25:55.930207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.335 [2024-11-29 14:25:55.930222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:18:14.335 [2024-11-29 14:25:55.930238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.335 [2024-11-29 14:25:55.932561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:14.335 [2024-11-29 14:25:55.937211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.335 [2024-11-29 14:25:55.937262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:14.335 [2024-11-29 14:25:55.937275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:18:14.336 [2024-11-29 14:25:55.937284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.937375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.937386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:14.336 [2024-11-29 14:25:55.937399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:14.336 [2024-11-29 14:25:55.937408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.949152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.949206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.336 [2024-11-29 14:25:55.949219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.692 ms 00:18:14.336 [2024-11-29 14:25:55.949233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.949346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.949357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.336 [2024-11-29 14:25:55.949372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:14.336 [2024-11-29 14:25:55.949386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.949455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.949467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:14.336 [2024-11-29 14:25:55.949478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:14.336 [2024-11-29 14:25:55.949513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.949545] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.336 [2024-11-29 14:25:55.952231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.952274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.336 [2024-11-29 14:25:55.952284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:18:14.336 [2024-11-29 14:25:55.952293] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.952332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.952342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:14.336 [2024-11-29 14:25:55.952352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:14.336 [2024-11-29 14:25:55.952361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.952395] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:14.336 [2024-11-29 14:25:55.952427] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:14.336 [2024-11-29 14:25:55.952473] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:14.336 [2024-11-29 14:25:55.952511] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:14.336 [2024-11-29 14:25:55.952629] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:14.336 [2024-11-29 14:25:55.952644] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:14.336 [2024-11-29 14:25:55.952656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:14.336 [2024-11-29 14:25:55.952672] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:14.336 [2024-11-29 14:25:55.952688] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:14.336 [2024-11-29 14:25:55.952698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:14.336 [2024-11-29 14:25:55.952709] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:14.336 [2024-11-29 14:25:55.952717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:14.336 [2024-11-29 14:25:55.952726] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:14.336 [2024-11-29 14:25:55.952736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.952749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:14.336 [2024-11-29 14:25:55.952758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:18:14.336 [2024-11-29 14:25:55.952771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.952862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.336 [2024-11-29 14:25:55.952876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:14.336 [2024-11-29 14:25:55.952885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:14.336 [2024-11-29 14:25:55.952895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.336 [2024-11-29 14:25:55.952999] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:14.336 [2024-11-29 14:25:55.953012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:14.336 [2024-11-29 14:25:55.953022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:14.336 [2024-11-29 14:25:55.953041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:14.336 [2024-11-29 14:25:55.953059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:14.336 [2024-11-29 14:25:55.953086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.336 [2024-11-29 14:25:55.953105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:14.336 [2024-11-29 14:25:55.953114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:14.336 [2024-11-29 14:25:55.953125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.336 [2024-11-29 14:25:55.953134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:14.336 [2024-11-29 14:25:55.953147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:14.336 [2024-11-29 14:25:55.953156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:14.336 [2024-11-29 14:25:55.953174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:14.336 [2024-11-29 14:25:55.953198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:14.336 [2024-11-29 14:25:55.953223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:14.336 [2024-11-29 14:25:55.953248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:14.336 [2024-11-29 14:25:55.953280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:14.336 [2024-11-29 14:25:55.953303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.336 [2024-11-29 14:25:55.953317] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:14.336 [2024-11-29 14:25:55.953323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:14.336 [2024-11-29 14:25:55.953331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.336 [2024-11-29 14:25:55.953338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:14.336 [2024-11-29 14:25:55.953344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:14.336 [2024-11-29 14:25:55.953351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:14.336 [2024-11-29 14:25:55.953364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:14.336 [2024-11-29 14:25:55.953371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:14.336 [2024-11-29 14:25:55.953389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:14.336 [2024-11-29 14:25:55.953402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.336 [2024-11-29 14:25:55.953425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:14.336 [2024-11-29 14:25:55.953432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:14.336 [2024-11-29 14:25:55.953439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:14.336 [2024-11-29 14:25:55.953459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:14.336 [2024-11-29 14:25:55.953466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:14.336 [2024-11-29 14:25:55.953473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:14.336 [2024-11-29 14:25:55.953483] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:14.336 [2024-11-29 14:25:55.953529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.336 [2024-11-29 14:25:55.953541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:14.337 [2024-11-29 14:25:55.953550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:14.337 [2024-11-29 14:25:55.953557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:14.337 [2024-11-29 14:25:55.953565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:14.337 [2024-11-29 14:25:55.953573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:14.337 [2024-11-29 14:25:55.953585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:14.337 [2024-11-29 14:25:55.953594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:14.337 [2024-11-29 14:25:55.953602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:14.337 [2024-11-29 14:25:55.953609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:14.337 [2024-11-29 14:25:55.953617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:14.337 [2024-11-29 14:25:55.953657] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:14.337 [2024-11-29 14:25:55.953666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:14.337 [2024-11-29 14:25:55.953685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:14.337 [2024-11-29 14:25:55.953694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:14.337 [2024-11-29 14:25:55.953702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:14.337 [2024-11-29 14:25:55.953714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:55.953726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:14.337 [2024-11-29 14:25:55.953735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:18:14.337 [2024-11-29 14:25:55.953748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:55.982748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:55.983115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.337 [2024-11-29 14:25:55.983154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.928 ms 00:18:14.337 [2024-11-29 14:25:55.983168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:55.983295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:55.983317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:14.337 [2024-11-29 14:25:55.983336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.081 ms 00:18:14.337 [2024-11-29 14:25:55.983347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:55.999626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:55.999677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.337 [2024-11-29 14:25:55.999690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.195 ms 00:18:14.337 [2024-11-29 14:25:55.999699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:55.999740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:55.999756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.337 [2024-11-29 14:25:55.999766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:14.337 [2024-11-29 14:25:55.999779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.000537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.000582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.337 [2024-11-29 14:25:56.000602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:18:14.337 [2024-11-29 14:25:56.000614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.000792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.000814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.337 [2024-11-29 14:25:56.000823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:14.337 [2024-11-29 14:25:56.000833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.010547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.010590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.337 [2024-11-29 14:25:56.010610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.689 ms 00:18:14.337 [2024-11-29 14:25:56.010619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.015203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:14.337 [2024-11-29 14:25:56.015260] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:14.337 [2024-11-29 14:25:56.015274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.015285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:14.337 [2024-11-29 14:25:56.015296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:18:14.337 [2024-11-29 14:25:56.015304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.031937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.032015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:14.337 [2024-11-29 14:25:56.032031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.576 ms 00:18:14.337 [2024-11-29 14:25:56.032043] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.035195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.035245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:14.337 [2024-11-29 14:25:56.035256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:18:14.337 [2024-11-29 14:25:56.035264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.039861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.039979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:14.337 [2024-11-29 14:25:56.040013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.539 ms 00:18:14.337 [2024-11-29 14:25:56.040038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.041124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.041193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:14.337 [2024-11-29 14:25:56.041222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:18:14.337 [2024-11-29 14:25:56.041245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.069009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.069082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:14.337 [2024-11-29 14:25:56.069101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.659 ms 00:18:14.337 [2024-11-29 14:25:56.069110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.077223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:14.337 [2024-11-29 14:25:56.080447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.080510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:14.337 [2024-11-29 14:25:56.080522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.281 ms 00:18:14.337 [2024-11-29 14:25:56.080537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.080631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.080648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:14.337 [2024-11-29 14:25:56.080658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:14.337 [2024-11-29 14:25:56.080667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.080768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.080780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:14.337 [2024-11-29 14:25:56.080789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:14.337 [2024-11-29 14:25:56.080797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.080828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.080837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:14.337 [2024-11-29 14:25:56.080847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:14.337 [2024-11-29 14:25:56.080861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.080898] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:14.337 [2024-11-29 14:25:56.080912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.080921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:14.337 [2024-11-29 14:25:56.080930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:14.337 [2024-11-29 14:25:56.080938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.086708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.086772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:14.337 [2024-11-29 14:25:56.086783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 00:18:14.337 [2024-11-29 14:25:56.086791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.337 [2024-11-29 14:25:56.086878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.337 [2024-11-29 14:25:56.086889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:14.338 [2024-11-29 14:25:56.086904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:14.338 [2024-11-29 14:25:56.086911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.338 [2024-11-29 14:25:56.088124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.165 ms, result 0 00:18:15.723  [2024-11-29T14:25:58.463Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T14:25:59.407Z] Copying: 53/1024 [MB] (36 MBps) [2024-11-29T14:26:00.353Z] Copying: 78/1024 [MB] (25 MBps) [2024-11-29T14:26:01.299Z] Copying: 92/1024 [MB] (14 MBps) [2024-11-29T14:26:02.291Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-29T14:26:03.247Z] Copying: 131/1024 [MB] (25 MBps) [2024-11-29T14:26:04.192Z] Copying: 145/1024 [MB] (14 MBps) [2024-11-29T14:26:05.135Z] Copying: 165/1024 [MB] (19 MBps) [2024-11-29T14:26:06.518Z] Copying: 199/1024 [MB] (33 MBps) [2024-11-29T14:26:07.461Z] Copying: 251/1024 [MB] (51 MBps) [2024-11-29T14:26:08.406Z] Copying: 286/1024 [MB] (35 MBps) [2024-11-29T14:26:09.349Z] Copying: 305/1024 [MB] (19 MBps) [2024-11-29T14:26:10.293Z] Copying: 319/1024 [MB] (14 MBps) [2024-11-29T14:26:11.235Z] Copying: 338/1024 [MB] (19 MBps) [2024-11-29T14:26:12.178Z] Copying: 354/1024 [MB] (15 MBps) [2024-11-29T14:26:13.121Z] Copying: 369/1024 [MB] (15 MBps) [2024-11-29T14:26:14.507Z] Copying: 382/1024 [MB] (12 MBps) [2024-11-29T14:26:15.450Z] Copying: 396/1024 [MB] (14 MBps) [2024-11-29T14:26:16.396Z] Copying: 417/1024 [MB] (20 MBps) [2024-11-29T14:26:17.341Z] Copying: 430/1024 [MB] (12 MBps) [2024-11-29T14:26:18.284Z] Copying: 442/1024 [MB] (12 MBps) [2024-11-29T14:26:19.227Z] Copying: 453/1024 [MB] (10 MBps) [2024-11-29T14:26:20.167Z] Copying: 469/1024 [MB] (16 MBps) [2024-11-29T14:26:21.110Z] Copying: 483/1024 [MB] (13 MBps) [2024-11-29T14:26:22.498Z] Copying: 504896/1048576 [kB] (10152 kBps) [2024-11-29T14:26:23.441Z] Copying: 503/1024 [MB] (10 MBps) [2024-11-29T14:26:24.385Z] Copying: 513/1024 [MB] 
(10 MBps) [2024-11-29T14:26:25.332Z] Copying: 523/1024 [MB] (10 MBps) [2024-11-29T14:26:26.275Z] Copying: 551/1024 [MB] (27 MBps) [2024-11-29T14:26:27.219Z] Copying: 591/1024 [MB] (40 MBps) [2024-11-29T14:26:28.160Z] Copying: 617/1024 [MB] (25 MBps) [2024-11-29T14:26:29.103Z] Copying: 668/1024 [MB] (51 MBps) [2024-11-29T14:26:30.527Z] Copying: 695/1024 [MB] (26 MBps) [2024-11-29T14:26:31.119Z] Copying: 726/1024 [MB] (30 MBps) [2024-11-29T14:26:32.505Z] Copying: 760/1024 [MB] (34 MBps) [2024-11-29T14:26:33.447Z] Copying: 779/1024 [MB] (19 MBps) [2024-11-29T14:26:34.391Z] Copying: 801/1024 [MB] (21 MBps) [2024-11-29T14:26:35.335Z] Copying: 821/1024 [MB] (19 MBps) [2024-11-29T14:26:36.277Z] Copying: 840/1024 [MB] (19 MBps) [2024-11-29T14:26:37.223Z] Copying: 862/1024 [MB] (21 MBps) [2024-11-29T14:26:38.164Z] Copying: 883/1024 [MB] (21 MBps) [2024-11-29T14:26:39.104Z] Copying: 900/1024 [MB] (16 MBps) [2024-11-29T14:26:40.488Z] Copying: 923/1024 [MB] (22 MBps) [2024-11-29T14:26:41.425Z] Copying: 938/1024 [MB] (14 MBps) [2024-11-29T14:26:42.357Z] Copying: 950/1024 [MB] (12 MBps) [2024-11-29T14:26:43.290Z] Copying: 962/1024 [MB] (12 MBps) [2024-11-29T14:26:44.222Z] Copying: 974/1024 [MB] (11 MBps) [2024-11-29T14:26:45.153Z] Copying: 985/1024 [MB] (11 MBps) [2024-11-29T14:26:46.525Z] Copying: 997/1024 [MB] (11 MBps) [2024-11-29T14:26:47.461Z] Copying: 1009/1024 [MB] (12 MBps) [2024-11-29T14:26:47.462Z] Copying: 1021/1024 [MB] (12 MBps) [2024-11-29T14:26:47.462Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-29 14:26:47.280828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.280866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:05.668 [2024-11-29 14:26:47.280883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:05.668 [2024-11-29 14:26:47.280892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.280910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.668 [2024-11-29 14:26:47.281438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.281454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:05.668 [2024-11-29 14:26:47.281466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:05.668 [2024-11-29 14:26:47.281474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.283569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.283597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:05.668 [2024-11-29 14:26:47.283605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:19:05.668 [2024-11-29 14:26:47.283611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.299102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.299220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:05.668 [2024-11-29 14:26:47.299235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.477 ms 00:19:05.668 [2024-11-29 14:26:47.299242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.303919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:05.668 [2024-11-29 14:26:47.303942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:05.668 [2024-11-29 14:26:47.303950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:19:05.668 [2024-11-29 14:26:47.303957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.306058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.306161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:05.668 [2024-11-29 14:26:47.306174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:19:05.668 [2024-11-29 14:26:47.306180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.310154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.310186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:05.668 [2024-11-29 14:26:47.310198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.950 ms 00:19:05.668 [2024-11-29 14:26:47.310204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.310288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.310296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:05.668 [2024-11-29 14:26:47.310303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:05.668 [2024-11-29 14:26:47.310309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.312944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.313039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:05.668 [2024-11-29 14:26:47.313050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:19:05.668 [2024-11-29 14:26:47.313056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.314899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.314931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:05.668 [2024-11-29 14:26:47.314952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:19:05.668 [2024-11-29 14:26:47.314958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.316603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.316627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:05.668 [2024-11-29 14:26:47.316633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:19:05.668 [2024-11-29 14:26:47.316638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 14:26:47.318099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.668 [2024-11-29 14:26:47.318123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:05.668 [2024-11-29 14:26:47.318130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:19:05.668 [2024-11-29 14:26:47.318136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.668 [2024-11-29 
14:26:47.318158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:05.668 [2024-11-29 14:26:47.318169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 
14:26:47.318313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:05.668 [2024-11-29 14:26:47.318439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:19:05.669 [2024-11-29 14:26:47.318463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:05.669 [2024-11-29 14:26:47.318795] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:05.669 [2024-11-29 14:26:47.318802] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:19:05.669 [2024-11-29 14:26:47.318808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:05.669 [2024-11-29 14:26:47.318814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:05.669 [2024-11-29 14:26:47.318820] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:05.669 [2024-11-29 14:26:47.318826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:05.669 [2024-11-29 14:26:47.318832] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:05.669 [2024-11-29 14:26:47.318838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:05.669 [2024-11-29 14:26:47.318844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:05.669 [2024-11-29 14:26:47.318849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:05.669 [2024-11-29 14:26:47.318853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:05.669 [2024-11-29 14:26:47.318859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.669 [2024-11-29 14:26:47.318864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:05.669 [2024-11-29 14:26:47.318871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:19:05.669 [2024-11-29 14:26:47.318881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.320609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.669 [2024-11-29 14:26:47.320634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:05.669 [2024-11-29 14:26:47.320641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:19:05.669 [2024-11-29 14:26:47.320648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.320736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.669 [2024-11-29 14:26:47.320747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:05.669 [2024-11-29 14:26:47.320756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:05.669 [2024-11-29 14:26:47.320762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.325863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.325894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.669 [2024-11-29 14:26:47.325903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.669 [2024-11-29 14:26:47.325909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.325957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.325965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.669 [2024-11-29 14:26:47.325974] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.669 [2024-11-29 14:26:47.325980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.326009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.326017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.669 [2024-11-29 14:26:47.326026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.669 [2024-11-29 14:26:47.326032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.326046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.326053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.669 [2024-11-29 14:26:47.326059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.669 [2024-11-29 14:26:47.326067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.336662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.336694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.669 [2024-11-29 14:26:47.336703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.669 [2024-11-29 14:26:47.336709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.669 [2024-11-29 14:26:47.345227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.669 [2024-11-29 14:26:47.345261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.670 [2024-11-29 14:26:47.345276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.670 [2024-11-29 14:26:47.345363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.670 [2024-11-29 14:26:47.345408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.670 [2024-11-29 14:26:47.345504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:19:05.670 [2024-11-29 14:26:47.345549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.670 [2024-11-29 14:26:47.345605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.670 [2024-11-29 14:26:47.345665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.670 [2024-11-29 14:26:47.345672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.670 [2024-11-29 14:26:47.345679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.670 [2024-11-29 14:26:47.345795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.935 ms, result 0 00:19:06.239 00:19:06.239 00:19:06.239 14:26:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:06.239 [2024-11-29 14:26:48.012022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:06.239 [2024-11-29 14:26:48.012275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87016 ] 00:19:06.499 [2024-11-29 14:26:48.162867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.499 [2024-11-29 14:26:48.229051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.759 [2024-11-29 14:26:48.378482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:06.759 [2024-11-29 14:26:48.378598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:06.759 [2024-11-29 14:26:48.542830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.759 [2024-11-29 14:26:48.542898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:06.759 [2024-11-29 14:26:48.542918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:06.759 [2024-11-29 14:26:48.542928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.759 [2024-11-29 14:26:48.543005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.759 [2024-11-29 14:26:48.543017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.760 [2024-11-29 14:26:48.543026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:06.760 [2024-11-29 14:26:48.543035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.760 [2024-11-29 14:26:48.543062] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:06.760 
[2024-11-29 14:26:48.543364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:06.760 [2024-11-29 14:26:48.543382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.760 [2024-11-29 14:26:48.543393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.760 [2024-11-29 14:26:48.543407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:06.760 [2024-11-29 14:26:48.543418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.760 [2024-11-29 14:26:48.545741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:06.760 [2024-11-29 14:26:48.550748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.760 [2024-11-29 14:26:48.550814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:06.760 [2024-11-29 14:26:48.550826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.009 ms 00:19:06.760 [2024-11-29 14:26:48.550839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.760 [2024-11-29 14:26:48.550964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.760 [2024-11-29 14:26:48.550977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:06.760 [2024-11-29 14:26:48.550990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:06.760 [2024-11-29 14:26:48.550999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.022 [2024-11-29 14:26:48.562989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.022 [2024-11-29 14:26:48.563046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.022 [2024-11-29 14:26:48.563063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.936 ms 00:19:07.022 [2024-11-29 14:26:48.563077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.022 [2024-11-29 14:26:48.563188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.022 [2024-11-29 14:26:48.563199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.022 [2024-11-29 14:26:48.563209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:07.022 [2024-11-29 14:26:48.563220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.022 [2024-11-29 14:26:48.563288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.022 [2024-11-29 14:26:48.563300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.022 [2024-11-29 14:26:48.563310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:07.022 [2024-11-29 14:26:48.563318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.022 [2024-11-29 14:26:48.563346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.023 [2024-11-29 14:26:48.566087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.023 [2024-11-29 14:26:48.566130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.023 [2024-11-29 14:26:48.566150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:19:07.023 [2024-11-29 14:26:48.566160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:07.023 [2024-11-29 14:26:48.566205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.023 [2024-11-29 14:26:48.566216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.023 [2024-11-29 14:26:48.566225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:07.023 [2024-11-29 14:26:48.566240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.023 [2024-11-29 14:26:48.566265] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.023 [2024-11-29 14:26:48.566304] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:07.023 [2024-11-29 14:26:48.566354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.023 [2024-11-29 14:26:48.566380] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:07.023 [2024-11-29 14:26:48.566514] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:07.023 [2024-11-29 14:26:48.566528] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.023 [2024-11-29 14:26:48.566542] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:07.023 [2024-11-29 14:26:48.566554] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.023 [2024-11-29 14:26:48.566571] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.023 [2024-11-29 14:26:48.566582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:07.023 [2024-11-29 14:26:48.566591] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.023 [2024-11-29 14:26:48.566599] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:07.023 [2024-11-29 14:26:48.566608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:07.023 [2024-11-29 14:26:48.566620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.023 [2024-11-29 14:26:48.566631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.023 [2024-11-29 14:26:48.566639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:07.023 [2024-11-29 14:26:48.566647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.023 [2024-11-29 14:26:48.566735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.023 [2024-11-29 14:26:48.566753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.023 [2024-11-29 14:26:48.566768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:07.023 [2024-11-29 14:26:48.566779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.023 [2024-11-29 14:26:48.566883] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.023 [2024-11-29 14:26:48.566898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.023 [2024-11-29 14:26:48.566911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.023 [2024-11-29 14:26:48.566949] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.566958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.023 [2024-11-29 14:26:48.566967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.566977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:07.023 [2024-11-29 14:26:48.566986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.023 [2024-11-29 14:26:48.566995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.023 [2024-11-29 14:26:48.567011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.023 [2024-11-29 14:26:48.567021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:07.023 [2024-11-29 14:26:48.567033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.023 [2024-11-29 14:26:48.567041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.023 [2024-11-29 14:26:48.567050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:07.023 [2024-11-29 14:26:48.567058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.023 [2024-11-29 14:26:48.567076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.023 [2024-11-29 14:26:48.567106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.023 [2024-11-29 14:26:48.567131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.023 [2024-11-29 14:26:48.567153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.023 [2024-11-29 14:26:48.567179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.023 [2024-11-29 14:26:48.567203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.023 [2024-11-29 14:26:48.567218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.023 
[2024-11-29 14:26:48.567225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:07.023 [2024-11-29 14:26:48.567232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.023 [2024-11-29 14:26:48.567239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:07.023 [2024-11-29 14:26:48.567247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:07.023 [2024-11-29 14:26:48.567256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:07.023 [2024-11-29 14:26:48.567269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:07.023 [2024-11-29 14:26:48.567276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.023 [2024-11-29 14:26:48.567300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.023 [2024-11-29 14:26:48.567308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.023 [2024-11-29 14:26:48.567328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.023 [2024-11-29 14:26:48.567337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.023 [2024-11-29 14:26:48.567344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.023 [2024-11-29 14:26:48.567351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.023 [2024-11-29 14:26:48.567360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.023 [2024-11-29 14:26:48.567368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.023 [2024-11-29 14:26:48.567377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.023 [2024-11-29 14:26:48.567389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.023 [2024-11-29 14:26:48.567398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:07.023 [2024-11-29 14:26:48.567406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:07.023 [2024-11-29 14:26:48.567412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:07.023 [2024-11-29 14:26:48.567420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:07.023 [2024-11-29 14:26:48.567431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:07.023 [2024-11-29 14:26:48.567441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:07.023 [2024-11-29 14:26:48.567449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:19:07.023 [2024-11-29 14:26:48.567456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:07.023 [2024-11-29 14:26:48.567464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:07.023 [2024-11-29 14:26:48.567473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:07.023 [2024-11-29 14:26:48.567481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:07.024 [2024-11-29 14:26:48.567489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:07.024 [2024-11-29 14:26:48.567515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:07.024 [2024-11-29 14:26:48.567525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:07.024 [2024-11-29 14:26:48.567534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.024 [2024-11-29 14:26:48.567543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.024 [2024-11-29 14:26:48.567553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.024 [2024-11-29 14:26:48.567562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.024 [2024-11-29 14:26:48.567571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.024 [2024-11-29 14:26:48.567581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.024 [2024-11-29 14:26:48.567589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.567603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.024 [2024-11-29 14:26:48.567612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:19:07.024 [2024-11-29 14:26:48.567621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.595673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.595991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.024 [2024-11-29 14:26:48.596027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.978 ms 00:19:07.024 [2024-11-29 14:26:48.596039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.596154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.596167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.024 [2024-11-29 14:26:48.596178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:07.024 [2024-11-29 
14:26:48.596187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.612543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.612592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.024 [2024-11-29 14:26:48.612605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.282 ms 00:19:07.024 [2024-11-29 14:26:48.612613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.612657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.612675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.024 [2024-11-29 14:26:48.612685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.024 [2024-11-29 14:26:48.612694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.613424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.613475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.024 [2024-11-29 14:26:48.613489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:19:07.024 [2024-11-29 14:26:48.613520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.613693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.613715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.024 [2024-11-29 14:26:48.613725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:07.024 [2024-11-29 14:26:48.613734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.623594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.623647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.024 [2024-11-29 14:26:48.623659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.833 ms 00:19:07.024 [2024-11-29 14:26:48.623670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.628636] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:07.024 [2024-11-29 14:26:48.628692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:07.024 [2024-11-29 14:26:48.628706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.628716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:07.024 [2024-11-29 14:26:48.628728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:19:07.024 [2024-11-29 14:26:48.628737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.645543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.645608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:07.024 [2024-11-29 14:26:48.645621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.747 ms 00:19:07.024 [2024-11-29 14:26:48.645630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:07.024 [2024-11-29 14:26:48.649117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.649167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:07.024 [2024-11-29 14:26:48.649179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:19:07.024 [2024-11-29 14:26:48.649187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.652414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.652464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:07.024 [2024-11-29 14:26:48.652475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:19:07.024 [2024-11-29 14:26:48.652483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.652881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.652899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.024 [2024-11-29 14:26:48.652910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:19:07.024 [2024-11-29 14:26:48.652924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.685611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.685668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:07.024 [2024-11-29 14:26:48.685682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.667 ms 00:19:07.024 [2024-11-29 14:26:48.685692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.694824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:07.024 [2024-11-29 14:26:48.699017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.699067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.024 [2024-11-29 14:26:48.699079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.264 ms 00:19:07.024 [2024-11-29 14:26:48.699092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.699193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.699205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:07.024 [2024-11-29 14:26:48.699216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:07.024 [2024-11-29 14:26:48.699225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.699306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.699319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.024 [2024-11-29 14:26:48.699332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:07.024 [2024-11-29 14:26:48.699341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.699365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.699374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.024 
[2024-11-29 14:26:48.699384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:07.024 [2024-11-29 14:26:48.699401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.699449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:07.024 [2024-11-29 14:26:48.699460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.699472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:07.024 [2024-11-29 14:26:48.699481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:07.024 [2024-11-29 14:26:48.699520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.706531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.706592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.024 [2024-11-29 14:26:48.706604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.991 ms 00:19:07.024 [2024-11-29 14:26:48.706613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.706715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.024 [2024-11-29 14:26:48.706726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.024 [2024-11-29 14:26:48.706736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:07.024 [2024-11-29 14:26:48.706750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.024 [2024-11-29 14:26:48.708394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 164.994 ms, result 0 00:19:08.409  [2024-11-29T14:26:51.138Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-29T14:26:52.080Z] Copying: 26/1024 [MB] (15 MBps) [2024-11-29T14:26:53.020Z] Copying: 37/1024 [MB] (11 MBps) [2024-11-29T14:26:53.954Z] Copying: 48/1024 [MB] (10 MBps) [2024-11-29T14:26:55.328Z] Copying: 60/1024 [MB] (12 MBps) [2024-11-29T14:26:56.259Z] Copying: 73/1024 [MB] (12 MBps) [2024-11-29T14:26:57.193Z] Copying: 85/1024 [MB] (12 MBps) [2024-11-29T14:26:58.133Z] Copying: 97/1024 [MB] (12 MBps) [2024-11-29T14:26:59.128Z] Copying: 109/1024 [MB] (12 MBps) [2024-11-29T14:27:00.083Z] Copying: 119/1024 [MB] (10 MBps) [2024-11-29T14:27:01.022Z] Copying: 130/1024 [MB] (10 MBps) [2024-11-29T14:27:01.965Z] Copying: 140/1024 [MB] (10 MBps) [2024-11-29T14:27:02.903Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-29T14:27:04.285Z] Copying: 162/1024 [MB] (11 MBps) [2024-11-29T14:27:05.220Z] Copying: 172/1024 [MB] (10 MBps) [2024-11-29T14:27:06.156Z] Copying: 184/1024 [MB] (11 MBps) [2024-11-29T14:27:07.093Z] Copying: 200/1024 [MB] (16 MBps) [2024-11-29T14:27:08.027Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-29T14:27:08.963Z] Copying: 222/1024 [MB] (11 MBps) [2024-11-29T14:27:09.898Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-29T14:27:11.278Z] Copying: 246/1024 [MB] (12 MBps) [2024-11-29T14:27:12.217Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-29T14:27:13.154Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-29T14:27:14.088Z] Copying: 280/1024 [MB] (10 MBps) [2024-11-29T14:27:15.023Z] Copying: 292/1024 [MB] (12 MBps) [2024-11-29T14:27:15.957Z] Copying: 305/1024 [MB] (12 MBps) [2024-11-29T14:27:17.336Z] Copying: 317/1024 [MB] (12 MBps) [2024-11-29T14:27:17.904Z] Copying: 329/1024 [MB] 
(12 MBps) [2024-11-29T14:27:19.277Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-29T14:27:20.211Z] Copying: 352/1024 [MB] (12 MBps) [2024-11-29T14:27:21.146Z] Copying: 364/1024 [MB] (12 MBps) [2024-11-29T14:27:22.078Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-29T14:27:23.044Z] Copying: 388/1024 [MB] (12 MBps) [2024-11-29T14:27:23.979Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-29T14:27:24.917Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-29T14:27:26.294Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-29T14:27:27.326Z] Copying: 433/1024 [MB] (11 MBps) [2024-11-29T14:27:28.261Z] Copying: 444/1024 [MB] (11 MBps) [2024-11-29T14:27:29.201Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-29T14:27:30.138Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-29T14:27:31.075Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-29T14:27:32.017Z] Copying: 490/1024 [MB] (11 MBps) [2024-11-29T14:27:32.961Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-29T14:27:34.338Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-29T14:27:34.910Z] Copying: 524/1024 [MB] (11 MBps) [2024-11-29T14:27:36.288Z] Copying: 535/1024 [MB] (10 MBps) [2024-11-29T14:27:37.223Z] Copying: 546/1024 [MB] (10 MBps) [2024-11-29T14:27:38.164Z] Copying: 558/1024 [MB] (12 MBps) [2024-11-29T14:27:39.107Z] Copying: 571/1024 [MB] (13 MBps) [2024-11-29T14:27:40.047Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-29T14:27:40.987Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-29T14:27:41.933Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-29T14:27:43.315Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-29T14:27:44.250Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-29T14:27:45.185Z] Copying: 637/1024 [MB] (12 MBps) [2024-11-29T14:27:46.117Z] Copying: 650/1024 [MB] (12 MBps) [2024-11-29T14:27:47.052Z] Copying: 662/1024 [MB] (12 MBps) [2024-11-29T14:27:47.991Z] Copying: 674/1024 [MB] (12 MBps) [2024-11-29T14:27:48.930Z] Copying: 685/1024 [MB] (11 MBps) [2024-11-29T14:27:50.306Z] Copying: 696/1024 [MB] (11 MBps) [2024-11-29T14:27:51.246Z] Copying: 708/1024 [MB] (12 MBps) [2024-11-29T14:27:52.190Z] Copying: 719/1024 [MB] (10 MBps) [2024-11-29T14:27:53.130Z] Copying: 732/1024 [MB] (13 MBps) [2024-11-29T14:27:54.068Z] Copying: 744/1024 [MB] (12 MBps) [2024-11-29T14:27:55.014Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-29T14:27:55.973Z] Copying: 773/1024 [MB] (16 MBps) [2024-11-29T14:27:57.000Z] Copying: 790/1024 [MB] (17 MBps) [2024-11-29T14:27:57.941Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-29T14:27:59.328Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-29T14:28:00.264Z] Copying: 827/1024 [MB] (15 MBps) [2024-11-29T14:28:01.200Z] Copying: 838/1024 [MB] (11 MBps) [2024-11-29T14:28:02.137Z] Copying: 851/1024 [MB] (12 MBps) [2024-11-29T14:28:03.082Z] Copying: 863/1024 [MB] (12 MBps) [2024-11-29T14:28:04.023Z] Copying: 878/1024 [MB] (14 MBps) [2024-11-29T14:28:04.964Z] Copying: 893/1024 [MB] (15 MBps) [2024-11-29T14:28:05.898Z] Copying: 911/1024 [MB] (17 MBps) [2024-11-29T14:28:07.277Z] Copying: 923/1024 [MB] (12 MBps) [2024-11-29T14:28:08.218Z] Copying: 945/1024 [MB] (22 MBps) [2024-11-29T14:28:09.157Z] Copying: 957/1024 [MB] (11 MBps) [2024-11-29T14:28:10.094Z] Copying: 968/1024 [MB] (11 MBps) [2024-11-29T14:28:11.026Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-29T14:28:11.960Z] Copying: 992/1024 [MB] (12 MBps) [2024-11-29T14:28:13.336Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-29T14:28:13.908Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-29T14:28:14.172Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-29 14:28:13.936410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 
[2024-11-29 14:28:13.936567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.378 [2024-11-29 14:28:13.936599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.378 [2024-11-29 14:28:13.936627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.936670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.378 [2024-11-29 14:28:13.937780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.938148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.378 [2024-11-29 14:28:13.938186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:20:32.378 [2024-11-29 14:28:13.938202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.938666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.938703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.378 [2024-11-29 14:28:13.938723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:32.378 [2024-11-29 14:28:13.938741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.947267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.947312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.378 [2024-11-29 14:28:13.947323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.490 ms 00:20:32.378 [2024-11-29 14:28:13.947332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.953557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.953600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.378 [2024-11-29 14:28:13.953612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:20:32.378 [2024-11-29 14:28:13.953621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.956830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.957031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.378 [2024-11-29 14:28:13.957051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:20:32.378 [2024-11-29 14:28:13.957061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.963223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.963289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:32.378 [2024-11-29 14:28:13.963301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.025 ms 00:20:32.378 [2024-11-29 14:28:13.963310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.963443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.963461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:32.378 [2024-11-29 14:28:13.963475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:32.378 [2024-11-29 
14:28:13.963484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.967007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.967196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:32.378 [2024-11-29 14:28:13.967214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:20:32.378 [2024-11-29 14:28:13.967222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.970274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.970448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:32.378 [2024-11-29 14:28:13.970466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:20:32.378 [2024-11-29 14:28:13.970474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.973141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.973205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:32.378 [2024-11-29 14:28:13.973219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:20:32.378 [2024-11-29 14:28:13.973227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.975565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.378 [2024-11-29 14:28:13.975724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:32.378 [2024-11-29 14:28:13.975786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:20:32.378 [2024-11-29 14:28:13.975810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.378 [2024-11-29 14:28:13.975857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:32.378 [2024-11-29 14:28:13.975906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:32.378 [2024-11-29 14:28:13.975973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.975980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.975988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.975997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 
wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976675] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976889] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:32.379 [2024-11-29 14:28:13.976976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:32.380 [2024-11-29 14:28:13.976984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:32.380 [2024-11-29 14:28:13.976991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:32.380 [2024-11-29 14:28:13.977000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:32.380 [2024-11-29 14:28:13.977016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:32.380 [2024-11-29 14:28:13.977026] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:20:32.380 [2024-11-29 14:28:13.977036] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:32.380 [2024-11-29 14:28:13.977045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:32.380 [2024-11-29 14:28:13.977053] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:32.380 [2024-11-29 14:28:13.977062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:32.380 [2024-11-29 14:28:13.977070] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:32.380 [2024-11-29 14:28:13.977080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:32.380 [2024-11-29 14:28:13.977088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:32.380 [2024-11-29 14:28:13.977095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:32.380 [2024-11-29 14:28:13.977101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:32.380 [2024-11-29 14:28:13.977110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.380 [2024-11-29 14:28:13.977128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Dump statistics 00:20:32.380 [2024-11-29 14:28:13.977137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:20:32.380 [2024-11-29 14:28:13.977151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.980610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.380 [2024-11-29 14:28:13.980766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:32.380 [2024-11-29 14:28:13.980823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:20:32.380 [2024-11-29 14:28:13.980847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.981023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.380 [2024-11-29 14:28:13.981135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:32.380 [2024-11-29 14:28:13.981332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:32.380 [2024-11-29 14:28:13.981778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.991152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:13.991317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.380 [2024-11-29 14:28:13.991377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:13.991400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.991488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:13.991534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.380 [2024-11-29 14:28:13.991555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:13.991575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.991669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:13.991844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.380 [2024-11-29 14:28:13.991870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:13.991891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:13.991925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:13.991955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.380 [2024-11-29 14:28:13.991977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:13.992347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.011803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.011866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.380 [2024-11-29 14:28:14.011884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.011893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.027530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 
14:28:14.027594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.380 [2024-11-29 14:28:14.027610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.027618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.027694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.027705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.380 [2024-11-29 14:28:14.027715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.027723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.027762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.027772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.380 [2024-11-29 14:28:14.027787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.027797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.027881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.027893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.380 [2024-11-29 14:28:14.027903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.027911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.027943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.027954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:32.380 [2024-11-29 14:28:14.027964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.027977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.028034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.028046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.380 [2024-11-29 14:28:14.028057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.028066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.028126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:32.380 [2024-11-29 14:28:14.028140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.380 [2024-11-29 14:28:14.028155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:32.380 [2024-11-29 14:28:14.028167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.380 [2024-11-29 14:28:14.028335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.899 ms, result 0 00:20:32.641 00:20:32.641 00:20:32.641 14:28:14 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.185 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:35.185 14:28:16 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:35.185 [2024-11-29 14:28:16.468078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:35.185 [2024-11-29 14:28:16.468208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87931 ] 00:20:35.185 [2024-11-29 14:28:16.617923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.185 [2024-11-29 14:28:16.667916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.185 [2024-11-29 14:28:16.757237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.185 [2024-11-29 14:28:16.757303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.185 [2024-11-29 14:28:16.914709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.185 [2024-11-29 14:28:16.914877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.185 [2024-11-29 14:28:16.914903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:35.185 [2024-11-29 14:28:16.914911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.185 [2024-11-29 14:28:16.914977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.185 [2024-11-29 14:28:16.914988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.185 [2024-11-29 14:28:16.915000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:35.185 [2024-11-29 14:28:16.915013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.185 [2024-11-29 14:28:16.915032] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.185 [2024-11-29 14:28:16.915274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.185 [2024-11-29 14:28:16.915289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.185 [2024-11-29 14:28:16.915301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.185 [2024-11-29 14:28:16.915311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:35.185 [2024-11-29 14:28:16.915321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.185 [2024-11-29 14:28:16.916550] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.185 [2024-11-29 14:28:16.919459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.185 [2024-11-29 14:28:16.919511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.185 [2024-11-29 14:28:16.919522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:20:35.185 [2024-11-29 14:28:16.919530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.919604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.919614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.186 [2024-11-29 14:28:16.919622] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:35.186 [2024-11-29 14:28:16.919632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.925800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.925948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.186 [2024-11-29 14:28:16.925972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:20:35.186 [2024-11-29 14:28:16.925982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.926082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.926092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.186 [2024-11-29 14:28:16.926100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:35.186 [2024-11-29 14:28:16.926107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.926145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.926157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.186 [2024-11-29 14:28:16.926165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:35.186 [2024-11-29 14:28:16.926172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.926196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.186 [2024-11-29 14:28:16.927882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.927912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.186 [2024-11-29 14:28:16.927922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:20:35.186 [2024-11-29 14:28:16.927929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.927958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.927966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.186 [2024-11-29 14:28:16.927973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:35.186 [2024-11-29 14:28:16.927980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.928002] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.186 [2024-11-29 14:28:16.928025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.186 [2024-11-29 14:28:16.928061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.186 [2024-11-29 14:28:16.928077] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:35.186 [2024-11-29 14:28:16.928180] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.186 [2024-11-29 14:28:16.928191] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.186 [2024-11-29 14:28:16.928201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:35.186 [2024-11-29 14:28:16.928215] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928227] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928235] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:35.186 [2024-11-29 14:28:16.928245] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.186 [2024-11-29 14:28:16.928255] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.186 [2024-11-29 14:28:16.928263] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.186 [2024-11-29 14:28:16.928271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.928278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.186 [2024-11-29 14:28:16.928286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:35.186 [2024-11-29 14:28:16.928293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.928387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.186 [2024-11-29 14:28:16.928399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.186 [2024-11-29 14:28:16.928409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:35.186 [2024-11-29 14:28:16.928416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.186 [2024-11-29 14:28:16.928530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.186 [2024-11-29 14:28:16.928543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.186 [2024-11-29 14:28:16.928552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.186 [2024-11-29 14:28:16.928583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.186 [2024-11-29 14:28:16.928609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.186 [2024-11-29 14:28:16.928625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.186 [2024-11-29 14:28:16.928633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:35.186 [2024-11-29 14:28:16.928642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.186 [2024-11-29 14:28:16.928650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.186 [2024-11-29 14:28:16.928658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:35.186 [2024-11-29 14:28:16.928666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928673] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.186 [2024-11-29 14:28:16.928681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.186 [2024-11-29 14:28:16.928707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.186 [2024-11-29 14:28:16.928730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.186 [2024-11-29 14:28:16.928753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.186 [2024-11-29 14:28:16.928783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.186 [2024-11-29 14:28:16.928806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.186 [2024-11-29 14:28:16.928821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.186 [2024-11-29 14:28:16.928828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:35.186 [2024-11-29 14:28:16.928835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.186 [2024-11-29 14:28:16.928843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.186 [2024-11-29 14:28:16.928851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:35.186 [2024-11-29 14:28:16.928859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.186 [2024-11-29 14:28:16.928874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:35.186 [2024-11-29 14:28:16.928882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.186 [2024-11-29 14:28:16.928900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.186 [2024-11-29 14:28:16.928909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.186 [2024-11-29 14:28:16.928928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:35.186 
[2024-11-29 14:28:16.928935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.186 [2024-11-29 14:28:16.928941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.186 [2024-11-29 14:28:16.928949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.186 [2024-11-29 14:28:16.928956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.186 [2024-11-29 14:28:16.928963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.186 [2024-11-29 14:28:16.928973] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.186 [2024-11-29 14:28:16.928982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.186 [2024-11-29 14:28:16.928990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:35.186 [2024-11-29 14:28:16.928997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:35.186 [2024-11-29 14:28:16.929005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:35.187 [2024-11-29 14:28:16.929012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:35.187 [2024-11-29 14:28:16.929019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:35.187 [2024-11-29 14:28:16.929028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:35.187 [2024-11-29 14:28:16.929035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:35.187 [2024-11-29 14:28:16.929042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:35.187 [2024-11-29 14:28:16.929050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:35.187 [2024-11-29 14:28:16.929057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:35.187 [2024-11-29 14:28:16.929093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.187 [2024-11-29 14:28:16.929101] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.187 [2024-11-29 14:28:16.929120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.187 [2024-11-29 14:28:16.929128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.187 [2024-11-29 14:28:16.929135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.187 [2024-11-29 14:28:16.929142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.929151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.187 [2024-11-29 14:28:16.929159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:20:35.187 [2024-11-29 14:28:16.929166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.954196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.954431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.187 [2024-11-29 14:28:16.955042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.984 ms 00:20:35.187 [2024-11-29 14:28:16.955127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.955385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.955507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.187 [2024-11-29 14:28:16.955591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:35.187 [2024-11-29 14:28:16.955634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.966390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.966550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.187 [2024-11-29 14:28:16.966604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.633 ms 00:20:35.187 [2024-11-29 14:28:16.966635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.966685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.966708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.187 [2024-11-29 14:28:16.966728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.187 [2024-11-29 14:28:16.966748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.967260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.967329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.187 [2024-11-29 14:28:16.967351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:20:35.187 [2024-11-29 14:28:16.967375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 
14:28:16.967540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.967565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.187 [2024-11-29 14:28:16.967585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:35.187 [2024-11-29 14:28:16.967604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-11-29 14:28:16.973889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-11-29 14:28:16.974027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.187 [2024-11-29 14:28:16.974054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:20:35.187 [2024-11-29 14:28:16.974062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.447 [2024-11-29 14:28:16.977621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.447 [2024-11-29 14:28:16.977667] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.447 [2024-11-29 14:28:16.977683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.447 [2024-11-29 14:28:16.977692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.447 [2024-11-29 14:28:16.977701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms 00:20:35.447 [2024-11-29 14:28:16.977708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.447 [2024-11-29 14:28:16.993184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.447 [2024-11-29 14:28:16.993340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.447 [2024-11-29 14:28:16.993370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.430 ms 00:20:35.447 [2024-11-29 14:28:16.993378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:16.995927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:16.995970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.448 [2024-11-29 14:28:16.995980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:20:35.448 [2024-11-29 14:28:16.995987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:16.998314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:16.998354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.448 [2024-11-29 14:28:16.998363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:20:35.448 [2024-11-29 14:28:16.998370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:16.998743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:16.998758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.448 [2024-11-29 14:28:16.998768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:20:35.448 [2024-11-29 14:28:16.998776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.019772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.019983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.448 [2024-11-29 14:28:17.020003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.978 ms 00:20:35.448 [2024-11-29 14:28:17.020019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.028152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:35.448 [2024-11-29 14:28:17.031076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.031219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.448 [2024-11-29 14:28:17.031247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.015 ms 00:20:35.448 [2024-11-29 14:28:17.031255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.031335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.031346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.448 [2024-11-29 14:28:17.031355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.448 [2024-11-29 14:28:17.031368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.031445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.031455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.448 [2024-11-29 14:28:17.031464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:35.448 [2024-11-29 14:28:17.031474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.031517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.031527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.448 [2024-11-29 14:28:17.031536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.448 [2024-11-29 14:28:17.031544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.031585] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.448 [2024-11-29 14:28:17.031596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.031606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.448 [2024-11-29 14:28:17.031615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:35.448 [2024-11-29 14:28:17.031623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.036950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.036998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.448 [2024-11-29 14:28:17.037009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.305 ms 00:20:35.448 [2024-11-29 14:28:17.037018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.037107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.448 [2024-11-29 14:28:17.037118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:35.448 [2024-11-29 14:28:17.037127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:35.448 [2024-11-29 14:28:17.037136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.448 [2024-11-29 14:28:17.038281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.113 ms, result 0 00:20:36.392  [2024-11-29T14:28:19.132Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-29T14:28:20.077Z] Copying: 34/1024 [MB] (19 MBps) [2024-11-29T14:28:21.461Z] Copying: 49/1024 [MB] (14 MBps) [2024-11-29T14:28:22.403Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-29T14:28:23.339Z] Copying: 81/1024 [MB] (15 MBps) [2024-11-29T14:28:24.275Z] Copying: 104/1024 [MB] (22 MBps) [2024-11-29T14:28:25.276Z] Copying: 116/1024 [MB] (12 MBps) [2024-11-29T14:28:26.214Z] Copying: 128/1024 [MB] (11 MBps) [2024-11-29T14:28:27.149Z] Copying: 150/1024 [MB] (22 MBps) [2024-11-29T14:28:28.081Z] Copying: 174/1024 [MB] (23 MBps) [2024-11-29T14:28:29.460Z] Copying: 195/1024 [MB] (21 MBps) [2024-11-29T14:28:30.392Z] Copying: 207/1024 [MB] (11 MBps) [2024-11-29T14:28:31.324Z] Copying: 219/1024 [MB] (11 MBps) [2024-11-29T14:28:32.261Z] Copying: 230/1024 [MB] (11 MBps) [2024-11-29T14:28:33.204Z] Copying: 247/1024 [MB] (17 MBps) [2024-11-29T14:28:34.145Z] Copying: 258/1024 [MB] (10 MBps) [2024-11-29T14:28:35.082Z] Copying: 269/1024 [MB] (10 MBps) [2024-11-29T14:28:36.477Z] Copying: 285/1024 [MB] (15 MBps) [2024-11-29T14:28:37.417Z] Copying: 295/1024 [MB] (10 MBps) [2024-11-29T14:28:38.355Z] Copying: 305/1024 [MB] (10 MBps) [2024-11-29T14:28:39.298Z] Copying: 317/1024 [MB] (11 MBps) [2024-11-29T14:28:40.240Z] Copying: 327/1024 [MB] (10 MBps) [2024-11-29T14:28:41.184Z] Copying: 351/1024 [MB] (23 MBps) [2024-11-29T14:28:42.128Z] Copying: 404/1024 [MB] (52 MBps) [2024-11-29T14:28:43.073Z] Copying: 457/1024 [MB] (53 MBps) [2024-11-29T14:28:44.458Z] Copying: 491/1024 [MB] (34 MBps) [2024-11-29T14:28:45.401Z] Copying: 507/1024 [MB] (15 MBps) [2024-11-29T14:28:46.344Z] Copying: 526/1024 [MB] (19 MBps) [2024-11-29T14:28:47.289Z] Copying: 544/1024 [MB] (18 MBps) [2024-11-29T14:28:48.233Z] Copying: 565/1024 [MB] (20 MBps) [2024-11-29T14:28:49.178Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-29T14:28:50.124Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-29T14:28:51.066Z] Copying: 601/1024 [MB] (15 MBps) [2024-11-29T14:28:52.455Z] Copying: 617/1024 [MB] (16 MBps) [2024-11-29T14:28:53.397Z] Copying: 635/1024 [MB] (18 MBps) [2024-11-29T14:28:54.111Z] Copying: 653/1024 [MB] (17 MBps) [2024-11-29T14:28:55.056Z] Copying: 669/1024 [MB] (15 MBps) [2024-11-29T14:28:56.445Z] Copying: 683/1024 [MB] (14 MBps) [2024-11-29T14:28:57.390Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-29T14:28:58.337Z] Copying: 703/1024 [MB] (10 MBps) [2024-11-29T14:28:59.278Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-29T14:29:00.221Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-29T14:29:01.165Z] Copying: 734/1024 [MB] (10 MBps) [2024-11-29T14:29:02.110Z] Copying: 782/1024 [MB] (47 MBps) [2024-11-29T14:29:03.050Z] Copying: 811/1024 [MB] (29 MBps) [2024-11-29T14:29:04.435Z] Copying: 829/1024 [MB] (17 MBps) [2024-11-29T14:29:05.380Z] Copying: 847/1024 [MB] (17 MBps) [2024-11-29T14:29:06.324Z] Copying: 861/1024 [MB] (14 MBps) [2024-11-29T14:29:07.268Z] Copying: 874/1024 [MB] (12 MBps) [2024-11-29T14:29:08.211Z] Copying: 889/1024 [MB] (15 MBps) [2024-11-29T14:29:09.155Z] Copying: 903/1024 [MB] (13 MBps) [2024-11-29T14:29:10.100Z] Copying: 921/1024 [MB] (17 MBps) 
[2024-11-29T14:29:11.488Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-29T14:29:12.061Z] Copying: 948/1024 [MB] (17 MBps) [2024-11-29T14:29:13.448Z] Copying: 966/1024 [MB] (17 MBps) [2024-11-29T14:29:14.390Z] Copying: 981/1024 [MB] (14 MBps) [2024-11-29T14:29:15.332Z] Copying: 991/1024 [MB] (10 MBps) [2024-11-29T14:29:16.274Z] Copying: 1002/1024 [MB] (10 MBps) [2024-11-29T14:29:17.217Z] Copying: 1012/1024 [MB] (10 MBps) [2024-11-29T14:29:18.161Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-29T14:29:18.161Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-29 14:29:17.962523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.367 [2024-11-29 14:29:17.963023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:36.367 [2024-11-29 14:29:17.963055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:36.367 [2024-11-29 14:29:17.963065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:17.964673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:36.368 [2024-11-29 14:29:17.966311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:17.966487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:36.368 [2024-11-29 14:29:17.966523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:21:36.368 [2024-11-29 14:29:17.966532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:17.978506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:17.978555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:36.368 [2024-11-29 14:29:17.978568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.964 ms 00:21:36.368 [2024-11-29 14:29:17.978578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:18.003607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:18.003673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:36.368 [2024-11-29 14:29:18.003686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.011 ms 00:21:36.368 [2024-11-29 14:29:18.003697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:18.009845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:18.009898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:36.368 [2024-11-29 14:29:18.009909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:21:36.368 [2024-11-29 14:29:18.009917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:18.012785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:18.012957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:36.368 [2024-11-29 14:29:18.012976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:21:36.368 [2024-11-29 14:29:18.012984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.368 [2024-11-29 14:29:18.017785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.368 [2024-11-29 14:29:18.017834] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:36.368 [2024-11-29 14:29:18.017846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:21:36.368 [2024-11-29 14:29:18.017854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.286164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.631 [2024-11-29 14:29:18.286237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:36.631 [2024-11-29 14:29:18.286251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 268.255 ms 00:21:36.631 [2024-11-29 14:29:18.286262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.289456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.631 [2024-11-29 14:29:18.289517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:36.631 [2024-11-29 14:29:18.289527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:21:36.631 [2024-11-29 14:29:18.289535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.292481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.631 [2024-11-29 14:29:18.292535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:36.631 [2024-11-29 14:29:18.292546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:21:36.631 [2024-11-29 14:29:18.292553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.294875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.631 [2024-11-29 14:29:18.294923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:36.631 [2024-11-29 14:29:18.294933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.280 ms 00:21:36.631 [2024-11-29 14:29:18.294940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.297296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.631 [2024-11-29 14:29:18.297343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:36.631 [2024-11-29 14:29:18.297352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:21:36.631 [2024-11-29 14:29:18.297360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.631 [2024-11-29 14:29:18.297398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:36.631 [2024-11-29 14:29:18.297413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97024 / 261120 wr_cnt: 1 state: open 00:21:36.631 [2024-11-29 14:29:18.297424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 
14:29:18.297464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:21:36.631 [2024-11-29 14:29:18.297688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:36.631 [2024-11-29 14:29:18.297777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.297997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:36.632 [2024-11-29 14:29:18.298242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:36.632 [2024-11-29 14:29:18.298250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:21:36.632 [2024-11-29 14:29:18.298259] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97024 00:21:36.632 [2024-11-29 14:29:18.298267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97984 00:21:36.632 [2024-11-29 14:29:18.298275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97024 00:21:36.632 [2024-11-29 14:29:18.298290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:21:36.632 [2024-11-29 14:29:18.298298] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:36.632 [2024-11-29 14:29:18.298307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:36.632 [2024-11-29 14:29:18.298322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:36.632 [2024-11-29 14:29:18.298329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:36.632 [2024-11-29 14:29:18.298335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:36.632 [2024-11-29 14:29:18.298343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.632 [2024-11-29 14:29:18.298352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:36.632 [2024-11-29 14:29:18.298361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:21:36.632 [2024-11-29 14:29:18.298370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.632 [2024-11-29 14:29:18.300679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.632 [2024-11-29 14:29:18.300719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:36.632 [2024-11-29 14:29:18.300731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:21:36.632 [2024-11-29 14:29:18.300740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.632 [2024-11-29 14:29:18.300865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:36.632 [2024-11-29 14:29:18.300875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:36.632 [2024-11-29 14:29:18.300884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:36.632 [2024-11-29 14:29:18.300892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.632 [2024-11-29 14:29:18.307708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.632 [2024-11-29 14:29:18.307754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:36.632 [2024-11-29 14:29:18.307764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.632 [2024-11-29 14:29:18.307772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.632 [2024-11-29 14:29:18.307827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.632 [2024-11-29 14:29:18.307836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:36.633 [2024-11-29 14:29:18.307844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.307852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.307900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.307916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:36.633 [2024-11-29 14:29:18.307924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.307932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.307947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.307955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:36.633 [2024-11-29 14:29:18.307969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:36.633 [2024-11-29 14:29:18.307976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.321752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.321966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:36.633 [2024-11-29 14:29:18.321986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.321996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:36.633 [2024-11-29 14:29:18.332552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:36.633 [2024-11-29 14:29:18.332666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:36.633 [2024-11-29 14:29:18.332755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:36.633 [2024-11-29 14:29:18.332856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:36.633 [2024-11-29 14:29:18.332916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.332962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.332971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:36.633 [2024-11-29 14:29:18.332980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.332990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.333037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:36.633 [2024-11-29 14:29:18.333047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:36.633 [2024-11-29 14:29:18.333056] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:36.633 [2024-11-29 14:29:18.333064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:36.633 [2024-11-29 14:29:18.333203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.710 ms, result 0 00:21:37.577 00:21:37.577 00:21:37.577 14:29:19 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:37.577 [2024-11-29 14:29:19.211041] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:37.577 [2024-11-29 14:29:19.211186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88583 ] 00:21:37.577 [2024-11-29 14:29:19.363212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.839 [2024-11-29 14:29:19.414000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.839 [2024-11-29 14:29:19.528958] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.839 [2024-11-29 14:29:19.529047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.102 [2024-11-29 14:29:19.689984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.690045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:38.102 [2024-11-29 14:29:19.690063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.102 [2024-11-29 14:29:19.690076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.690142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.690154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.102 [2024-11-29 14:29:19.690163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:38.102 [2024-11-29 14:29:19.690171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.690193] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:38.102 [2024-11-29 14:29:19.690473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:38.102 [2024-11-29 14:29:19.690523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.690533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.102 [2024-11-29 14:29:19.690543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:21:38.102 [2024-11-29 14:29:19.690554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.692335] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:38.102 [2024-11-29 14:29:19.696142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.696193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:38.102 
[2024-11-29 14:29:19.696204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:21:38.102 [2024-11-29 14:29:19.696221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.696298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.696308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:38.102 [2024-11-29 14:29:19.696322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:38.102 [2024-11-29 14:29:19.696329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.704364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.704405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.102 [2024-11-29 14:29:19.704416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.987 ms 00:21:38.102 [2024-11-29 14:29:19.704428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.704552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.704569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.102 [2024-11-29 14:29:19.704578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:21:38.102 [2024-11-29 14:29:19.704586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.704643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.704653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.102 [2024-11-29 14:29:19.704662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:38.102 [2024-11-29 14:29:19.704679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.704708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.102 [2024-11-29 14:29:19.706746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.706784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.102 [2024-11-29 14:29:19.706795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:21:38.102 [2024-11-29 14:29:19.706803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.706840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.706848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.102 [2024-11-29 14:29:19.706857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:38.102 [2024-11-29 14:29:19.706865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.706887] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:38.102 [2024-11-29 14:29:19.706912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:38.102 [2024-11-29 14:29:19.706950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:38.102 [2024-11-29 14:29:19.706982] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:38.102 [2024-11-29 14:29:19.707091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.102 [2024-11-29 14:29:19.707102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.102 [2024-11-29 14:29:19.707113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.102 [2024-11-29 14:29:19.707127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.102 [2024-11-29 14:29:19.707140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.102 [2024-11-29 14:29:19.707152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:38.102 [2024-11-29 14:29:19.707160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.102 [2024-11-29 14:29:19.707171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.102 [2024-11-29 14:29:19.707178] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.102 [2024-11-29 14:29:19.707193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.707205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.102 [2024-11-29 14:29:19.707213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:21:38.102 [2024-11-29 14:29:19.707220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.707306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.102 [2024-11-29 14:29:19.707324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.102 [2024-11-29 14:29:19.707331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:38.102 [2024-11-29 14:29:19.707344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.102 [2024-11-29 14:29:19.707445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.102 [2024-11-29 14:29:19.707456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.102 [2024-11-29 14:29:19.707465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.102 [2024-11-29 14:29:19.707482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.102 [2024-11-29 14:29:19.707507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.102 [2024-11-29 14:29:19.707515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.102 [2024-11-29 14:29:19.707523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:38.102 [2024-11-29 14:29:19.707532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.102 [2024-11-29 14:29:19.707541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:38.102 [2024-11-29 14:29:19.707549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.102 [2024-11-29 14:29:19.707558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.102 [2024-11-29 14:29:19.707565] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:21:38.103 [2024-11-29 14:29:19.707573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.103 [2024-11-29 14:29:19.707581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.103 [2024-11-29 14:29:19.707589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:38.103 [2024-11-29 14:29:19.707602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.103 [2024-11-29 14:29:19.707618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.103 [2024-11-29 14:29:19.707644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.103 [2024-11-29 14:29:19.707668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.103 [2024-11-29 14:29:19.707693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.103 [2024-11-29 14:29:19.707717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.103 [2024-11-29 14:29:19.707743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.103 [2024-11-29 14:29:19.707759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:38.103 [2024-11-29 14:29:19.707766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:38.103 [2024-11-29 14:29:19.707772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.103 [2024-11-29 14:29:19.707779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.103 [2024-11-29 14:29:19.707786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:38.103 [2024-11-29 14:29:19.707794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.103 [2024-11-29 14:29:19.707807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:38.103 [2024-11-29 14:29:19.707814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707820] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.103 [2024-11-29 14:29:19.707833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.103 [2024-11-29 14:29:19.707840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.103 [2024-11-29 14:29:19.707863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.103 [2024-11-29 14:29:19.707870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.103 [2024-11-29 14:29:19.707877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.103 [2024-11-29 14:29:19.707884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.103 [2024-11-29 14:29:19.707890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.103 [2024-11-29 14:29:19.707898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.103 [2024-11-29 14:29:19.707907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.103 [2024-11-29 14:29:19.707917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.707926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:38.103 [2024-11-29 14:29:19.707934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:38.103 [2024-11-29 14:29:19.707943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:38.103 [2024-11-29 14:29:19.707951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:38.103 [2024-11-29 14:29:19.707958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:38.103 [2024-11-29 14:29:19.707965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:38.103 [2024-11-29 14:29:19.707973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:38.103 [2024-11-29 14:29:19.707980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:38.103 [2024-11-29 14:29:19.707989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:38.103 [2024-11-29 14:29:19.707997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708018] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:38.103 [2024-11-29 14:29:19.708032] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.103 [2024-11-29 14:29:19.708041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.103 [2024-11-29 14:29:19.708061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.103 [2024-11-29 14:29:19.708068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.103 [2024-11-29 14:29:19.708075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.103 [2024-11-29 14:29:19.708083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.103 [2024-11-29 14:29:19.708091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.103 [2024-11-29 14:29:19.708098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:21:38.103 [2024-11-29 14:29:19.708106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.103 [2024-11-29 14:29:19.732817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.103 [2024-11-29 14:29:19.733064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.103 [2024-11-29 14:29:19.733167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.665 ms 00:21:38.103 [2024-11-29 14:29:19.733207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.103 [2024-11-29 14:29:19.733392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.103 [2024-11-29 14:29:19.733553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:38.104 [2024-11-29 14:29:19.733648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:38.104 [2024-11-29 14:29:19.733685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.745776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.745932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.104 [2024-11-29 14:29:19.745988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.975 ms 00:21:38.104 [2024-11-29 14:29:19.746011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.746060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.746081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.104 [2024-11-29 14:29:19.746101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:38.104 [2024-11-29 14:29:19.746120] mngt/ftl_mngt.c: 
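The layout is dumped twice above: once per region in MiB (`dump_region`) and once as raw block offsets and sizes in hex (`Region type:... blk_offs:... blk_sz:...`). The two views line up if each FTL block is 4 KiB, e.g. the l2p region's `blk_sz:0x5000` (20480 blocks) is exactly the 80.00 MiB reported earlier. Below is a small sketch of that conversion; the 4 KiB block size and the `blk_to_mib` helper name are assumptions inferred from the numbers in this log, not something the log states.

```bash
# Convert blk_sz values from the superblock layout dump into MiB,
# assuming a 4 KiB FTL block (the size that makes both dumps agree).
blk_to_mib() {
  local blocks=$(( $1 ))   # accepts hex such as 0x5000
  awk -v b="$blocks" 'BEGIN { printf "%.2f MiB\n", b * 4096 / (1024 * 1024) }'
}

blk_to_mib 0x5000     # l2p region      -> 80.00 MiB
blk_to_mib 0x800      # p2l0..p2l3      -> 8.00 MiB each
blk_to_mib 0x1900000  # data_btm region -> 102400.00 MiB
```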
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.746723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.746865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.104 [2024-11-29 14:29:19.746937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:21:38.104 [2024-11-29 14:29:19.746960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.747572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.747689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.104 [2024-11-29 14:29:19.747746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:21:38.104 [2024-11-29 14:29:19.747769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.754731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.754884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.104 [2024-11-29 14:29:19.754950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:21:38.104 [2024-11-29 14:29:19.754988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.758946] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:38.104 [2024-11-29 14:29:19.759127] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:38.104 [2024-11-29 14:29:19.759193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.759214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:38.104 [2024-11-29 14:29:19.759241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.008 ms 00:21:38.104 [2024-11-29 14:29:19.759261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.775003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.775180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:38.104 [2024-11-29 14:29:19.775251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:21:38.104 [2024-11-29 14:29:19.775275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.778101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.778247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:38.104 [2024-11-29 14:29:19.778299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.771 ms 00:21:38.104 [2024-11-29 14:29:19.778321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.781053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.781211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:38.104 [2024-11-29 14:29:19.781263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:21:38.104 [2024-11-29 14:29:19.781284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 
14:29:19.781739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.781876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:38.104 [2024-11-29 14:29:19.781954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:21:38.104 [2024-11-29 14:29:19.781981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.806126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.806322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.104 [2024-11-29 14:29:19.806385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:21:38.104 [2024-11-29 14:29:19.806409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.814801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:38.104 [2024-11-29 14:29:19.818031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.818168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.104 [2024-11-29 14:29:19.818232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.475 ms 00:21:38.104 [2024-11-29 14:29:19.818254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.818347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.818374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.104 [2024-11-29 14:29:19.818398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:38.104 [2024-11-29 14:29:19.818417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.820195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.820341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.104 [2024-11-29 14:29:19.820404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:21:38.104 [2024-11-29 14:29:19.820435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.820482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.820526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:38.104 [2024-11-29 14:29:19.820555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:38.104 [2024-11-29 14:29:19.820575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.820627] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.104 [2024-11-29 14:29:19.820699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.820721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.104 [2024-11-29 14:29:19.820742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:38.104 [2024-11-29 14:29:19.820761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.826356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.826541] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.104 [2024-11-29 14:29:19.826561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.555 ms 00:21:38.104 [2024-11-29 14:29:19.826569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.827031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.104 [2024-11-29 14:29:19.827188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.104 [2024-11-29 14:29:19.827201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:38.104 [2024-11-29 14:29:19.827211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.104 [2024-11-29 14:29:19.828470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.020 ms, result 0 00:21:39.492  [2024-11-29T14:29:22.231Z] Copying: 7512/1048576 [kB] (7512 kBps) [2024-11-29T14:29:23.176Z] Copying: 17/1024 [MB] (10 MBps) [2024-11-29T14:29:24.120Z] Copying: 38/1024 [MB] (20 MBps) [2024-11-29T14:29:25.064Z] Copying: 51/1024 [MB] (12 MBps) [2024-11-29T14:29:26.451Z] Copying: 72/1024 [MB] (21 MBps) [2024-11-29T14:29:27.023Z] Copying: 100/1024 [MB] (27 MBps) [2024-11-29T14:29:28.411Z] Copying: 111/1024 [MB] (11 MBps) [2024-11-29T14:29:29.356Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-29T14:29:30.301Z] Copying: 153/1024 [MB] (32 MBps) [2024-11-29T14:29:31.310Z] Copying: 171/1024 [MB] (18 MBps) [2024-11-29T14:29:32.257Z] Copying: 187/1024 [MB] (15 MBps) [2024-11-29T14:29:33.203Z] Copying: 203/1024 [MB] (15 MBps) [2024-11-29T14:29:34.146Z] Copying: 221/1024 [MB] (17 MBps) [2024-11-29T14:29:35.091Z] Copying: 237/1024 [MB] (16 MBps) [2024-11-29T14:29:36.033Z] Copying: 260/1024 [MB] (22 MBps) [2024-11-29T14:29:37.414Z] Copying: 272/1024 [MB] (11 MBps) [2024-11-29T14:29:38.354Z] Copying: 292/1024 [MB] (19 MBps) [2024-11-29T14:29:39.295Z] Copying: 315/1024 [MB] (23 MBps) [2024-11-29T14:29:40.237Z] Copying: 329/1024 [MB] (14 MBps) [2024-11-29T14:29:41.183Z] Copying: 346/1024 [MB] (16 MBps) [2024-11-29T14:29:42.129Z] Copying: 365/1024 [MB] (19 MBps) [2024-11-29T14:29:43.074Z] Copying: 388/1024 [MB] (23 MBps) [2024-11-29T14:29:44.019Z] Copying: 408/1024 [MB] (20 MBps) [2024-11-29T14:29:45.405Z] Copying: 424/1024 [MB] (15 MBps) [2024-11-29T14:29:46.348Z] Copying: 442/1024 [MB] (17 MBps) [2024-11-29T14:29:47.294Z] Copying: 452/1024 [MB] (10 MBps) [2024-11-29T14:29:48.238Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-29T14:29:49.184Z] Copying: 476/1024 [MB] (12 MBps) [2024-11-29T14:29:50.128Z] Copying: 495/1024 [MB] (19 MBps) [2024-11-29T14:29:51.072Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-29T14:29:52.458Z] Copying: 516/1024 [MB] (10 MBps) [2024-11-29T14:29:53.028Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-29T14:29:54.414Z] Copying: 537/1024 [MB] (10 MBps) [2024-11-29T14:29:55.353Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-29T14:29:56.294Z] Copying: 566/1024 [MB] (18 MBps) [2024-11-29T14:29:57.236Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-29T14:29:58.180Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-29T14:29:59.125Z] Copying: 605/1024 [MB] (16 MBps) [2024-11-29T14:30:00.069Z] Copying: 617/1024 [MB] (11 MBps) [2024-11-29T14:30:01.081Z] Copying: 636/1024 [MB] (19 MBps) [2024-11-29T14:30:02.024Z] Copying: 652/1024 [MB] (16 MBps) [2024-11-29T14:30:03.414Z] Copying: 671/1024 [MB] (18 MBps) [2024-11-29T14:30:04.365Z] Copying: 687/1024 [MB] (16 MBps) 
[2024-11-29T14:30:05.309Z] Copying: 705/1024 [MB] (17 MBps) [2024-11-29T14:30:06.255Z] Copying: 722/1024 [MB] (17 MBps) [2024-11-29T14:30:07.201Z] Copying: 746/1024 [MB] (23 MBps) [2024-11-29T14:30:08.146Z] Copying: 762/1024 [MB] (16 MBps) [2024-11-29T14:30:09.091Z] Copying: 785/1024 [MB] (22 MBps) [2024-11-29T14:30:10.034Z] Copying: 807/1024 [MB] (22 MBps) [2024-11-29T14:30:11.423Z] Copying: 828/1024 [MB] (20 MBps) [2024-11-29T14:30:12.367Z] Copying: 844/1024 [MB] (15 MBps) [2024-11-29T14:30:13.311Z] Copying: 859/1024 [MB] (15 MBps) [2024-11-29T14:30:14.255Z] Copying: 877/1024 [MB] (18 MBps) [2024-11-29T14:30:15.209Z] Copying: 890/1024 [MB] (12 MBps) [2024-11-29T14:30:16.154Z] Copying: 904/1024 [MB] (14 MBps) [2024-11-29T14:30:17.108Z] Copying: 915/1024 [MB] (11 MBps) [2024-11-29T14:30:18.054Z] Copying: 926/1024 [MB] (11 MBps) [2024-11-29T14:30:19.440Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-29T14:30:20.385Z] Copying: 949/1024 [MB] (11 MBps) [2024-11-29T14:30:21.330Z] Copying: 966/1024 [MB] (16 MBps) [2024-11-29T14:30:22.275Z] Copying: 976/1024 [MB] (10 MBps) [2024-11-29T14:30:23.220Z] Copying: 995/1024 [MB] (19 MBps) [2024-11-29T14:30:24.164Z] Copying: 1008/1024 [MB] (12 MBps) [2024-11-29T14:30:24.738Z] Copying: 1018/1024 [MB] (10 MBps) [2024-11-29T14:30:25.000Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-29 14:30:24.912197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.912287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:43.206 [2024-11-29 14:30:24.912304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:43.206 [2024-11-29 14:30:24.912314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.912339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:43.206 [2024-11-29 14:30:24.913577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.913614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:43.206 [2024-11-29 14:30:24.913636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:22:43.206 [2024-11-29 14:30:24.913651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.913899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.913911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:43.206 [2024-11-29 14:30:24.913920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:22:43.206 [2024-11-29 14:30:24.913929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.919878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.919927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:43.206 [2024-11-29 14:30:24.919948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.932 ms 00:22:43.206 [2024-11-29 14:30:24.919956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.926190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.926395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:43.206 [2024-11-29 14:30:24.926417] mngt/ftl_mngt.c: 
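The bracketed progress lines above come from `spdk_dd` copying the requested range: roughly 1024 MB moved between the end of FTL startup (~14:29:19) and the last progress entry (~14:30:25), about 66 seconds, which is consistent with the reported `average 15 MBps`. A one-line check of that arithmetic follows; the 66-second figure is an approximation read off the timestamps in this log.

```bash
# Rough consistency check of the reported average copy rate.
awk 'BEGIN { printf "%.1f MB/s\n", 1024 / 66 }'   # ~15.5 MB/s, reported as "average 15 MBps"
```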
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:22:43.206 [2024-11-29 14:30:24.926425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.929310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.929360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:43.206 [2024-11-29 14:30:24.929371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms 00:22:43.206 [2024-11-29 14:30:24.929379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.206 [2024-11-29 14:30:24.935803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.206 [2024-11-29 14:30:24.936399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.206 [2024-11-29 14:30:24.936428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:22:43.206 [2024-11-29 14:30:24.936439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.287184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.794 [2024-11-29 14:30:25.287256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:43.794 [2024-11-29 14:30:25.287269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 350.640 ms 00:22:43.794 [2024-11-29 14:30:25.287278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.290419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.794 [2024-11-29 14:30:25.290467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:43.794 [2024-11-29 14:30:25.290478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:22:43.794 [2024-11-29 14:30:25.290486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.293251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.794 [2024-11-29 14:30:25.293448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:43.794 [2024-11-29 14:30:25.293466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:22:43.794 [2024-11-29 14:30:25.293475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.295869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.794 [2024-11-29 14:30:25.295916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:43.794 [2024-11-29 14:30:25.295927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:22:43.794 [2024-11-29 14:30:25.295935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.298035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.794 [2024-11-29 14:30:25.298183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:43.794 [2024-11-29 14:30:25.298242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:22:43.794 [2024-11-29 14:30:25.298265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.794 [2024-11-29 14:30:25.298309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:43.794 [2024-11-29 14:30:25.298340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:43.794 [2024-11-29 14:30:25.298373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.298980] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.299010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:43.794 [2024-11-29 14:30:25.299017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 
14:30:25.299381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:22:43.795 [2024-11-29 14:30:25.299597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:22:43.795 [2024-11-29 14:30:25.299801] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:43.795 [2024-11-29 14:30:25.299810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9176fde2-a4ae-4ddb-9d8b-b6480a765c80 00:22:43.795 [2024-11-29 14:30:25.299819] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:43.795 [2024-11-29 14:30:25.299836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35008 00:22:43.795 [2024-11-29 14:30:25.299843] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34048 00:22:43.795 [2024-11-29 14:30:25.299860] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0282 00:22:43.795 [2024-11-29 14:30:25.299868] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:43.795 [2024-11-29 14:30:25.299877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:43.795 [2024-11-29 14:30:25.299886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:43.795 [2024-11-29 14:30:25.299893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:43.795 [2024-11-29 14:30:25.299900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:43.795 [2024-11-29 14:30:25.299908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.795 [2024-11-29 14:30:25.299916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:43.795 [2024-11-29 14:30:25.299925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:22:43.795 [2024-11-29 14:30:25.299932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.795 [2024-11-29 14:30:25.302261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.795 [2024-11-29 14:30:25.302307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:43.795 [2024-11-29 14:30:25.302316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:22:43.795 [2024-11-29 14:30:25.302325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.795 [2024-11-29 14:30:25.302454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.795 [2024-11-29 14:30:25.302464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:43.796 [2024-11-29 14:30:25.302474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:43.796 [2024-11-29 14:30:25.302482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.309437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.309512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:43.796 [2024-11-29 14:30:25.309523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.309531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.309604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.309614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:43.796 [2024-11-29 14:30:25.309623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.309630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
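The statistics dump above reports `total writes: 35008`, `user writes: 34048` and `WAF: 1.0282`; the WAF figure matches the ratio of those two counters (total device writes per user write). A quick check of that arithmetic against the values in this log:

```bash
# Write amplification factor as printed in the stats dump:
# WAF = total writes / user writes.
awk 'BEGIN { printf "WAF: %.4f\n", 35008 / 34048 }'   # -> WAF: 1.0282
```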
status: 0 00:22:43.796 [2024-11-29 14:30:25.309696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.309711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:43.796 [2024-11-29 14:30:25.309720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.309728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.309742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.309751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:43.796 [2024-11-29 14:30:25.309758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.309766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.323987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.324039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:43.796 [2024-11-29 14:30:25.324051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.324059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.334881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:43.796 [2024-11-29 14:30:25.335122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:43.796 [2024-11-29 14:30:25.335243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:43.796 [2024-11-29 14:30:25.335307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:43.796 [2024-11-29 14:30:25.335417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:43.796 [2024-11-29 14:30:25.335484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 
14:30:25.335517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:43.796 [2024-11-29 14:30:25.335583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.796 [2024-11-29 14:30:25.335657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:43.796 [2024-11-29 14:30:25.335667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.796 [2024-11-29 14:30:25.335676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.796 [2024-11-29 14:30:25.335808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 423.580 ms, result 0 00:22:43.796 00:22:43.796 00:22:43.796 14:30:25 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:46.342 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86241 00:22:46.342 14:30:27 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86241 ']' 00:22:46.342 14:30:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86241 00:22:46.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86241) - No such process 00:22:46.342 Process with pid 86241 is not found 00:22:46.342 14:30:27 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86241 is not found' 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:46.342 Remove shared memory files 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:46.342 14:30:27 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:46.342 ************************************ 00:22:46.342 END TEST ftl_restore 00:22:46.342 ************************************ 00:22:46.342 00:22:46.342 real 4m53.112s 00:22:46.342 user 4m40.431s 00:22:46.342 sys 0m12.434s 00:22:46.342 14:30:27 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.342 14:30:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:46.342 14:30:27 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:46.342 14:30:27 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:46.342 14:30:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.342 14:30:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:46.342 ************************************ 00:22:46.342 START TEST ftl_dirty_shutdown 00:22:46.342 ************************************ 00:22:46.342 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:46.342 * Looking for test storage... 00:22:46.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.342 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:46.342 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:46.342 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.603 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.604 --rc genhtml_branch_coverage=1 00:22:46.604 --rc genhtml_function_coverage=1 00:22:46.604 --rc genhtml_legend=1 00:22:46.604 --rc geninfo_all_blocks=1 00:22:46.604 --rc geninfo_unexecuted_blocks=1 00:22:46.604 00:22:46.604 ' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.604 --rc genhtml_branch_coverage=1 00:22:46.604 --rc genhtml_function_coverage=1 00:22:46.604 --rc genhtml_legend=1 00:22:46.604 --rc geninfo_all_blocks=1 00:22:46.604 --rc geninfo_unexecuted_blocks=1 00:22:46.604 00:22:46.604 ' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.604 --rc genhtml_branch_coverage=1 00:22:46.604 --rc genhtml_function_coverage=1 00:22:46.604 --rc genhtml_legend=1 00:22:46.604 --rc geninfo_all_blocks=1 00:22:46.604 --rc geninfo_unexecuted_blocks=1 00:22:46.604 00:22:46.604 ' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:46.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.604 --rc genhtml_branch_coverage=1 00:22:46.604 --rc genhtml_function_coverage=1 00:22:46.604 --rc genhtml_legend=1 00:22:46.604 --rc geninfo_all_blocks=1 00:22:46.604 --rc geninfo_unexecuted_blocks=1 00:22:46.604 00:22:46.604 ' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:46.604 14:30:28 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89359 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89359 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89359 ']' 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:46.604 14:30:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:46.604 [2024-11-29 14:30:28.270402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:46.604 [2024-11-29 14:30:28.270865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89359 ] 00:22:46.865 [2024-11-29 14:30:28.421203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.865 [2024-11-29 14:30:28.471456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:47.446 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:47.707 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:47.969 { 00:22:47.969 "name": "nvme0n1", 00:22:47.969 "aliases": [ 00:22:47.969 "a7faa87e-6ef9-4d8c-a3d9-21048564aa5d" 00:22:47.969 ], 00:22:47.969 "product_name": "NVMe disk", 00:22:47.969 "block_size": 4096, 00:22:47.969 "num_blocks": 1310720, 00:22:47.969 "uuid": "a7faa87e-6ef9-4d8c-a3d9-21048564aa5d", 00:22:47.969 "numa_id": -1, 00:22:47.969 "assigned_rate_limits": { 00:22:47.969 "rw_ios_per_sec": 0, 00:22:47.969 "rw_mbytes_per_sec": 0, 00:22:47.969 "r_mbytes_per_sec": 0, 00:22:47.969 "w_mbytes_per_sec": 0 00:22:47.969 }, 00:22:47.969 "claimed": true, 00:22:47.969 "claim_type": "read_many_write_one", 00:22:47.969 "zoned": false, 00:22:47.969 "supported_io_types": { 00:22:47.969 "read": true, 00:22:47.969 "write": true, 00:22:47.969 "unmap": true, 00:22:47.969 "flush": true, 00:22:47.969 "reset": true, 00:22:47.969 "nvme_admin": true, 00:22:47.969 "nvme_io": true, 00:22:47.969 "nvme_io_md": false, 00:22:47.969 "write_zeroes": true, 00:22:47.969 "zcopy": false, 00:22:47.969 "get_zone_info": false, 00:22:47.969 "zone_management": false, 00:22:47.969 "zone_append": false, 00:22:47.969 "compare": true, 00:22:47.969 "compare_and_write": false, 00:22:47.969 "abort": true, 00:22:47.969 "seek_hole": false, 00:22:47.969 "seek_data": false, 00:22:47.969 
"copy": true, 00:22:47.969 "nvme_iov_md": false 00:22:47.969 }, 00:22:47.969 "driver_specific": { 00:22:47.969 "nvme": [ 00:22:47.969 { 00:22:47.969 "pci_address": "0000:00:11.0", 00:22:47.969 "trid": { 00:22:47.969 "trtype": "PCIe", 00:22:47.969 "traddr": "0000:00:11.0" 00:22:47.969 }, 00:22:47.969 "ctrlr_data": { 00:22:47.969 "cntlid": 0, 00:22:47.969 "vendor_id": "0x1b36", 00:22:47.969 "model_number": "QEMU NVMe Ctrl", 00:22:47.969 "serial_number": "12341", 00:22:47.969 "firmware_revision": "8.0.0", 00:22:47.969 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.969 "oacs": { 00:22:47.969 "security": 0, 00:22:47.969 "format": 1, 00:22:47.969 "firmware": 0, 00:22:47.969 "ns_manage": 1 00:22:47.969 }, 00:22:47.969 "multi_ctrlr": false, 00:22:47.969 "ana_reporting": false 00:22:47.969 }, 00:22:47.969 "vs": { 00:22:47.969 "nvme_version": "1.4" 00:22:47.969 }, 00:22:47.969 "ns_data": { 00:22:47.969 "id": 1, 00:22:47.969 "can_share": false 00:22:47.969 } 00:22:47.969 } 00:22:47.969 ], 00:22:47.969 "mp_policy": "active_passive" 00:22:47.969 } 00:22:47.969 } 00:22:47.969 ]' 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.969 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:48.231 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3c32d854-be65-4395-be63-38e4078d06d0 00:22:48.231 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:48.231 14:30:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c32d854-be65-4395-be63-38e4078d06d0 00:22:48.492 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:48.754 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=afdaff27-a6c1-4749-90f8-7c741451b014 00:22:48.754 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u afdaff27-a6c1-4749-90f8-7c741451b014 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:49.016 { 00:22:49.016 "name": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:49.016 "aliases": [ 00:22:49.016 "lvs/nvme0n1p0" 00:22:49.016 ], 00:22:49.016 "product_name": "Logical Volume", 00:22:49.016 "block_size": 4096, 00:22:49.016 "num_blocks": 26476544, 00:22:49.016 "uuid": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:49.016 "assigned_rate_limits": { 00:22:49.016 "rw_ios_per_sec": 0, 00:22:49.016 "rw_mbytes_per_sec": 0, 00:22:49.016 "r_mbytes_per_sec": 0, 00:22:49.016 "w_mbytes_per_sec": 0 00:22:49.016 }, 00:22:49.016 "claimed": false, 00:22:49.016 "zoned": false, 00:22:49.016 "supported_io_types": { 00:22:49.016 "read": true, 00:22:49.016 "write": true, 00:22:49.016 "unmap": true, 00:22:49.016 "flush": false, 00:22:49.016 "reset": true, 00:22:49.016 "nvme_admin": false, 00:22:49.016 "nvme_io": false, 00:22:49.016 "nvme_io_md": false, 00:22:49.016 "write_zeroes": true, 00:22:49.016 "zcopy": false, 00:22:49.016 "get_zone_info": false, 00:22:49.016 "zone_management": false, 00:22:49.016 "zone_append": false, 00:22:49.016 "compare": false, 00:22:49.016 "compare_and_write": false, 00:22:49.016 "abort": false, 00:22:49.016 "seek_hole": true, 00:22:49.016 "seek_data": true, 00:22:49.016 "copy": false, 00:22:49.016 "nvme_iov_md": false 00:22:49.016 }, 00:22:49.016 "driver_specific": { 00:22:49.016 "lvol": { 00:22:49.016 "lvol_store_uuid": "afdaff27-a6c1-4749-90f8-7c741451b014", 00:22:49.016 "base_bdev": "nvme0n1", 00:22:49.016 "thin_provision": true, 00:22:49.016 "num_allocated_clusters": 0, 00:22:49.016 "snapshot": false, 00:22:49.016 "clone": false, 00:22:49.016 "esnap_clone": false 00:22:49.016 } 00:22:49.016 } 00:22:49.016 } 00:22:49.016 ]' 00:22:49.016 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:49.278 14:30:30 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:49.539 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:49.801 { 00:22:49.801 "name": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:49.801 "aliases": [ 00:22:49.801 "lvs/nvme0n1p0" 00:22:49.801 ], 00:22:49.801 "product_name": "Logical Volume", 00:22:49.801 "block_size": 4096, 00:22:49.801 "num_blocks": 26476544, 00:22:49.801 "uuid": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:49.801 "assigned_rate_limits": { 00:22:49.801 "rw_ios_per_sec": 0, 00:22:49.801 "rw_mbytes_per_sec": 0, 00:22:49.801 "r_mbytes_per_sec": 0, 00:22:49.801 "w_mbytes_per_sec": 0 00:22:49.801 }, 00:22:49.801 "claimed": false, 00:22:49.801 "zoned": false, 00:22:49.801 "supported_io_types": { 00:22:49.801 "read": true, 00:22:49.801 "write": true, 00:22:49.801 "unmap": true, 00:22:49.801 "flush": false, 00:22:49.801 "reset": true, 00:22:49.801 "nvme_admin": false, 00:22:49.801 "nvme_io": false, 00:22:49.801 "nvme_io_md": false, 00:22:49.801 "write_zeroes": true, 00:22:49.801 "zcopy": false, 00:22:49.801 "get_zone_info": false, 00:22:49.801 "zone_management": false, 00:22:49.801 "zone_append": false, 00:22:49.801 "compare": false, 00:22:49.801 "compare_and_write": false, 00:22:49.801 "abort": false, 00:22:49.801 "seek_hole": true, 00:22:49.801 "seek_data": true, 00:22:49.801 "copy": false, 00:22:49.801 "nvme_iov_md": false 00:22:49.801 }, 00:22:49.801 "driver_specific": { 00:22:49.801 "lvol": { 00:22:49.801 "lvol_store_uuid": "afdaff27-a6c1-4749-90f8-7c741451b014", 00:22:49.801 "base_bdev": "nvme0n1", 00:22:49.801 "thin_provision": true, 00:22:49.801 "num_allocated_clusters": 0, 00:22:49.801 "snapshot": false, 00:22:49.801 "clone": false, 00:22:49.801 "esnap_clone": false 00:22:49.801 } 00:22:49.801 } 00:22:49.801 } 00:22:49.801 ]' 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:49.801 14:30:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ab85bee-ac52-4dec-9c09-aea9342112cb 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:50.063 { 00:22:50.063 "name": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:50.063 "aliases": [ 00:22:50.063 "lvs/nvme0n1p0" 00:22:50.063 ], 00:22:50.063 "product_name": "Logical Volume", 00:22:50.063 "block_size": 4096, 00:22:50.063 "num_blocks": 26476544, 00:22:50.063 "uuid": "1ab85bee-ac52-4dec-9c09-aea9342112cb", 00:22:50.063 "assigned_rate_limits": { 00:22:50.063 "rw_ios_per_sec": 0, 00:22:50.063 "rw_mbytes_per_sec": 0, 00:22:50.063 "r_mbytes_per_sec": 0, 00:22:50.063 "w_mbytes_per_sec": 0 00:22:50.063 }, 00:22:50.063 "claimed": false, 00:22:50.063 "zoned": false, 00:22:50.063 "supported_io_types": { 00:22:50.063 "read": true, 00:22:50.063 "write": true, 00:22:50.063 "unmap": true, 00:22:50.063 "flush": false, 00:22:50.063 "reset": true, 00:22:50.063 "nvme_admin": false, 00:22:50.063 "nvme_io": false, 00:22:50.063 "nvme_io_md": false, 00:22:50.063 "write_zeroes": true, 00:22:50.063 "zcopy": false, 00:22:50.063 "get_zone_info": false, 00:22:50.063 "zone_management": false, 00:22:50.063 "zone_append": false, 00:22:50.063 "compare": false, 00:22:50.063 "compare_and_write": false, 00:22:50.063 "abort": false, 00:22:50.063 "seek_hole": true, 00:22:50.063 "seek_data": true, 00:22:50.063 "copy": false, 00:22:50.063 "nvme_iov_md": false 00:22:50.063 }, 00:22:50.063 "driver_specific": { 00:22:50.063 "lvol": { 00:22:50.063 "lvol_store_uuid": "afdaff27-a6c1-4749-90f8-7c741451b014", 00:22:50.063 "base_bdev": "nvme0n1", 00:22:50.063 "thin_provision": true, 00:22:50.063 "num_allocated_clusters": 0, 00:22:50.063 "snapshot": false, 00:22:50.063 "clone": false, 00:22:50.063 "esnap_clone": false 00:22:50.063 } 00:22:50.063 } 00:22:50.063 } 00:22:50.063 ]' 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:50.063 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1ab85bee-ac52-4dec-9c09-aea9342112cb 
--l2p_dram_limit 10' 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:50.329 14:30:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1ab85bee-ac52-4dec-9c09-aea9342112cb --l2p_dram_limit 10 -c nvc0n1p0 00:22:50.329 [2024-11-29 14:30:32.046830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.046869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.329 [2024-11-29 14:30:32.046880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.329 [2024-11-29 14:30:32.046887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.046932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.046941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.329 [2024-11-29 14:30:32.046947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:50.329 [2024-11-29 14:30:32.046955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.046974] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.329 [2024-11-29 14:30:32.047183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.329 [2024-11-29 14:30:32.047193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.047201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.329 [2024-11-29 14:30:32.047209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:22:50.329 [2024-11-29 14:30:32.047217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.047240] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ee36d1dd-8366-42ba-9e96-84f84953a1f1 00:22:50.329 [2024-11-29 14:30:32.048220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.048240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:50.329 [2024-11-29 14:30:32.048252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:50.329 [2024-11-29 14:30:32.048259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.052937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.052964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.329 [2024-11-29 14:30:32.052976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:22:50.329 [2024-11-29 14:30:32.052982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.053040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.053047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.329 [2024-11-29 14:30:32.053055] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:50.329 [2024-11-29 14:30:32.053062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.053092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.053099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.329 [2024-11-29 14:30:32.053107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.329 [2024-11-29 14:30:32.053112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.053129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.329 [2024-11-29 14:30:32.054378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.054405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.329 [2024-11-29 14:30:32.054413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:22:50.329 [2024-11-29 14:30:32.054420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.054444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.329 [2024-11-29 14:30:32.054455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.329 [2024-11-29 14:30:32.054461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:50.329 [2024-11-29 14:30:32.054469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.329 [2024-11-29 14:30:32.054488] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:50.329 [2024-11-29 14:30:32.054614] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.330 [2024-11-29 14:30:32.054623] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.330 [2024-11-29 14:30:32.054633] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:50.330 [2024-11-29 14:30:32.054640] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.330 [2024-11-29 14:30:32.054652] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.330 [2024-11-29 14:30:32.054658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.330 [2024-11-29 14:30:32.054667] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.330 [2024-11-29 14:30:32.054673] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.330 [2024-11-29 14:30:32.054679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.330 [2024-11-29 14:30:32.054687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.330 [2024-11-29 14:30:32.054693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.330 [2024-11-29 14:30:32.054700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:22:50.330 [2024-11-29 14:30:32.054707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.330 [2024-11-29 14:30:32.054769] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.330 [2024-11-29 14:30:32.054779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.330 [2024-11-29 14:30:32.054784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:50.330 [2024-11-29 14:30:32.054791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.330 [2024-11-29 14:30:32.054862] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.330 [2024-11-29 14:30:32.054872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.330 [2024-11-29 14:30:32.054878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.330 [2024-11-29 14:30:32.054885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.054891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.330 [2024-11-29 14:30:32.054897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.054903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.330 [2024-11-29 14:30:32.054911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.330 [2024-11-29 14:30:32.054916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.330 [2024-11-29 14:30:32.054923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.330 [2024-11-29 14:30:32.054929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.330 [2024-11-29 14:30:32.054936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.330 [2024-11-29 14:30:32.054940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.330 [2024-11-29 14:30:32.054948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.330 [2024-11-29 14:30:32.054954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.330 [2024-11-29 14:30:32.054960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.054965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.330 [2024-11-29 14:30:32.054971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.330 [2024-11-29 14:30:32.054976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.054983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.330 [2024-11-29 14:30:32.055004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.330 [2024-11-29 14:30:32.055021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.330 [2024-11-29 14:30:32.055040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055053] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.330 [2024-11-29 14:30:32.055062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.330 [2024-11-29 14:30:32.055081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.330 [2024-11-29 14:30:32.055095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.330 [2024-11-29 14:30:32.055101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.330 [2024-11-29 14:30:32.055107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.330 [2024-11-29 14:30:32.055114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.330 [2024-11-29 14:30:32.055120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.330 [2024-11-29 14:30:32.055127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.330 [2024-11-29 14:30:32.055140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.330 [2024-11-29 14:30:32.055146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055153] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.330 [2024-11-29 14:30:32.055160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.330 [2024-11-29 14:30:32.055171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.330 [2024-11-29 14:30:32.055186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.330 [2024-11-29 14:30:32.055192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.330 [2024-11-29 14:30:32.055199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.330 [2024-11-29 14:30:32.055205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.330 [2024-11-29 14:30:32.055212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.330 [2024-11-29 14:30:32.055218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.330 [2024-11-29 14:30:32.055228] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.330 [2024-11-29 14:30:32.055238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.330 [2024-11-29 14:30:32.055254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.330 [2024-11-29 14:30:32.055261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.330 [2024-11-29 14:30:32.055268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.330 [2024-11-29 14:30:32.055275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.330 [2024-11-29 14:30:32.055281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.330 [2024-11-29 14:30:32.055290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.330 [2024-11-29 14:30:32.055296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.330 [2024-11-29 14:30:32.055303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.330 [2024-11-29 14:30:32.055309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.330 [2024-11-29 14:30:32.055344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.330 [2024-11-29 14:30:32.055352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.330 [2024-11-29 14:30:32.055366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.330 [2024-11-29 14:30:32.055374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.330 [2024-11-29 14:30:32.055381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.330 [2024-11-29 14:30:32.055389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.330 [2024-11-29 14:30:32.055395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.330 [2024-11-29 14:30:32.055404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:22:50.330 [2024-11-29 14:30:32.055411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.330 [2024-11-29 14:30:32.055442] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:50.330 [2024-11-29 14:30:32.055449] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:54.590 [2024-11-29 14:30:36.138589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.590 [2024-11-29 14:30:36.138833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:54.590 [2024-11-29 14:30:36.138936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4083.125 ms 00:22:54.590 [2024-11-29 14:30:36.138963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.590 [2024-11-29 14:30:36.148595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.590 [2024-11-29 14:30:36.148744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.590 [2024-11-29 14:30:36.148804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.525 ms 00:22:54.590 [2024-11-29 14:30:36.148833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.590 [2024-11-29 14:30:36.148947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.590 [2024-11-29 14:30:36.148970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:54.590 [2024-11-29 14:30:36.149039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:54.591 [2024-11-29 14:30:36.149067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.157654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.157800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.591 [2024-11-29 14:30:36.157871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.530 ms 00:22:54.591 [2024-11-29 14:30:36.157895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.157949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.157975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.591 [2024-11-29 14:30:36.157997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:54.591 [2024-11-29 14:30:36.158056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.158434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.158658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.591 [2024-11-29 14:30:36.158674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:22:54.591 [2024-11-29 14:30:36.158687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.158794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.158803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.591 [2024-11-29 14:30:36.158814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:54.591 [2024-11-29 14:30:36.158821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.180637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.180712] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.591 [2024-11-29 14:30:36.180742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.785 ms 00:22:54.591 [2024-11-29 14:30:36.180760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.190162] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:54.591 [2024-11-29 14:30:36.193156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.193195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:54.591 [2024-11-29 14:30:36.193206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.194 ms 00:22:54.591 [2024-11-29 14:30:36.193216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.276234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.276438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:54.591 [2024-11-29 14:30:36.276458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.992 ms 00:22:54.591 [2024-11-29 14:30:36.276471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.276713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.276728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:54.591 [2024-11-29 14:30:36.276737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:22:54.591 [2024-11-29 14:30:36.276747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.281697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.281744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:54.591 [2024-11-29 14:30:36.281755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:22:54.591 [2024-11-29 14:30:36.281764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.285864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.285910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:54.591 [2024-11-29 14:30:36.285921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:22:54.591 [2024-11-29 14:30:36.285930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.286242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.286254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:54.591 [2024-11-29 14:30:36.286263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:22:54.591 [2024-11-29 14:30:36.286274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.326756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.326805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:54.591 [2024-11-29 14:30:36.326816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.448 ms 00:22:54.591 [2024-11-29 14:30:36.326826] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.332679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.332727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:54.591 [2024-11-29 14:30:36.332737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:22:54.591 [2024-11-29 14:30:36.332747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.337434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.337478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:54.591 [2024-11-29 14:30:36.337488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:22:54.591 [2024-11-29 14:30:36.337515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.342698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.342742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:54.591 [2024-11-29 14:30:36.342751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:22:54.591 [2024-11-29 14:30:36.342763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.342806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.342818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:54.591 [2024-11-29 14:30:36.342827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:54.591 [2024-11-29 14:30:36.342837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.342904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.591 [2024-11-29 14:30:36.342915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:54.591 [2024-11-29 14:30:36.342923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:54.591 [2024-11-29 14:30:36.342940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.591 [2024-11-29 14:30:36.343944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4296.637 ms, result 0 00:22:54.591 { 00:22:54.591 "name": "ftl0", 00:22:54.591 "uuid": "ee36d1dd-8366-42ba-9e96-84f84953a1f1" 00:22:54.591 } 00:22:54.591 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:54.591 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:54.853 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:54.853 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:54.853 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:55.116 /dev/nbd0 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:55.116 1+0 records in 00:22:55.116 1+0 records out 00:22:55.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462552 s, 8.9 MB/s 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:55.116 14:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:55.116 [2024-11-29 14:30:36.906510] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:55.116 [2024-11-29 14:30:36.906647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89507 ] 00:22:55.378 [2024-11-29 14:30:37.062633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.378 [2024-11-29 14:30:37.114347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.760  [2024-11-29T14:30:39.493Z] Copying: 193/1024 [MB] (193 MBps) [2024-11-29T14:30:40.436Z] Copying: 390/1024 [MB] (197 MBps) [2024-11-29T14:30:41.378Z] Copying: 630/1024 [MB] (239 MBps) [2024-11-29T14:30:41.950Z] Copying: 874/1024 [MB] (244 MBps) [2024-11-29T14:30:41.950Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:23:00.156 00:23:00.156 14:30:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:02.692 14:30:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:02.692 [2024-11-29 14:30:43.953698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:02.693 [2024-11-29 14:30:43.953815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89585 ] 00:23:02.693 [2024-11-29 14:30:44.099046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.693 [2024-11-29 14:30:44.132256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.631  [2024-11-29T14:30:46.354Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-29T14:30:47.293Z] Copying: 48/1024 [MB] (31 MBps) [2024-11-29T14:30:48.234Z] Copying: 77/1024 [MB] (29 MBps) [2024-11-29T14:30:49.620Z] Copying: 107/1024 [MB] (29 MBps) [2024-11-29T14:30:50.191Z] Copying: 138/1024 [MB] (31 MBps) [2024-11-29T14:30:51.578Z] Copying: 167/1024 [MB] (28 MBps) [2024-11-29T14:30:52.522Z] Copying: 196/1024 [MB] (29 MBps) [2024-11-29T14:30:53.463Z] Copying: 226/1024 [MB] (30 MBps) [2024-11-29T14:30:54.402Z] Copying: 260/1024 [MB] (33 MBps) [2024-11-29T14:30:55.343Z] Copying: 294/1024 [MB] (34 MBps) [2024-11-29T14:30:56.283Z] Copying: 325/1024 [MB] (30 MBps) [2024-11-29T14:30:57.222Z] Copying: 357/1024 [MB] (32 MBps) [2024-11-29T14:30:58.603Z] Copying: 392/1024 [MB] (34 MBps) [2024-11-29T14:30:59.547Z] Copying: 427/1024 [MB] (34 MBps) [2024-11-29T14:31:00.483Z] Copying: 457/1024 [MB] (30 MBps) [2024-11-29T14:31:01.420Z] Copying: 490/1024 [MB] (33 MBps) [2024-11-29T14:31:02.357Z] Copying: 527/1024 [MB] (37 MBps) [2024-11-29T14:31:03.298Z] Copying: 564/1024 [MB] (36 MBps) [2024-11-29T14:31:04.241Z] Copying: 599/1024 [MB] (35 MBps) [2024-11-29T14:31:05.183Z] Copying: 629/1024 [MB] (29 MBps) [2024-11-29T14:31:06.618Z] Copying: 661/1024 [MB] (32 MBps) [2024-11-29T14:31:07.188Z] Copying: 692/1024 [MB] (31 MBps) [2024-11-29T14:31:08.569Z] Copying: 726/1024 [MB] (33 MBps) [2024-11-29T14:31:09.507Z] Copying: 755/1024 [MB] (29 MBps) [2024-11-29T14:31:10.444Z] Copying: 786/1024 [MB] (30 MBps) [2024-11-29T14:31:11.384Z] Copying: 819/1024 [MB] (32 MBps) [2024-11-29T14:31:12.320Z] Copying: 849/1024 [MB] (30 MBps) [2024-11-29T14:31:13.262Z] Copying: 884/1024 [MB] (34 MBps) [2024-11-29T14:31:14.204Z] Copying: 917/1024 [MB] (32 MBps) [2024-11-29T14:31:15.585Z] Copying: 947/1024 [MB] (30 MBps) [2024-11-29T14:31:16.527Z] Copying: 980/1024 [MB] (33 MBps) [2024-11-29T14:31:16.788Z] Copying: 1011/1024 [MB] (30 MBps) [2024-11-29T14:31:16.788Z] Copying: 1024/1024 [MB] (average 31 MBps) 00:23:34.994 00:23:34.994 14:31:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:34.994 14:31:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:35.252 14:31:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:35.514 [2024-11-29 14:31:17.188750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.188792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:35.514 [2024-11-29 14:31:17.188807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:35.514 [2024-11-29 14:31:17.188814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.188836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:35.514 [2024-11-29 
14:31:17.189377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.189406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:35.514 [2024-11-29 14:31:17.189414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:23:35.514 [2024-11-29 14:31:17.189425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.191894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.191921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:35.514 [2024-11-29 14:31:17.191930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:23:35.514 [2024-11-29 14:31:17.191937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.205262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.205289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:35.514 [2024-11-29 14:31:17.205297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.311 ms 00:23:35.514 [2024-11-29 14:31:17.205305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.209986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.210009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:35.514 [2024-11-29 14:31:17.210017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.650 ms 00:23:35.514 [2024-11-29 14:31:17.210026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.212073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.212103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:35.514 [2024-11-29 14:31:17.212110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:23:35.514 [2024-11-29 14:31:17.212118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.217745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.217776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:35.514 [2024-11-29 14:31:17.217787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:23:35.514 [2024-11-29 14:31:17.217795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.217889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.217899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:35.514 [2024-11-29 14:31:17.217906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:35.514 [2024-11-29 14:31:17.217914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.220517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.220541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:35.514 [2024-11-29 14:31:17.220549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:23:35.514 [2024-11-29 14:31:17.220557] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.222701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.222731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:35.514 [2024-11-29 14:31:17.222738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:23:35.514 [2024-11-29 14:31:17.222746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.224552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.224580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:35.514 [2024-11-29 14:31:17.224587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:23:35.514 [2024-11-29 14:31:17.224595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.226264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.514 [2024-11-29 14:31:17.226290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:35.514 [2024-11-29 14:31:17.226297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:23:35.514 [2024-11-29 14:31:17.226304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.514 [2024-11-29 14:31:17.226329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:35.514 [2024-11-29 14:31:17.226343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:35.514 [2024-11-29 14:31:17.226351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:35.514 [2024-11-29 14:31:17.226360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:35.514 [2024-11-29 14:31:17.226366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:35.514 [2024-11-29 14:31:17.226375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 
14:31:17.226634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:23:35.515 [2024-11-29 14:31:17.226805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.226997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:35.515 [2024-11-29 14:31:17.227023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:35.516 [2024-11-29 14:31:17.227070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:35.516 [2024-11-29 14:31:17.227077] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee36d1dd-8366-42ba-9e96-84f84953a1f1 00:23:35.516 [2024-11-29 14:31:17.227087] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:35.516 [2024-11-29 14:31:17.227092] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:35.516 [2024-11-29 14:31:17.227099] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:35.516 [2024-11-29 14:31:17.227109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:35.516 [2024-11-29 14:31:17.227117] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:35.516 [2024-11-29 14:31:17.227123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:35.516 [2024-11-29 14:31:17.227132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:35.516 [2024-11-29 14:31:17.227137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:35.516 [2024-11-29 14:31:17.227144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:35.516 [2024-11-29 14:31:17.227150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.516 [2024-11-29 14:31:17.227158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:35.516 [2024-11-29 14:31:17.227167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:23:35.516 [2024-11-29 14:31:17.227175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.228926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.516 [2024-11-29 14:31:17.228947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:35.516 [2024-11-29 
14:31:17.228955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:23:35.516 [2024-11-29 14:31:17.228964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.229051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.516 [2024-11-29 14:31:17.229061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:35.516 [2024-11-29 14:31:17.229067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:35.516 [2024-11-29 14:31:17.229074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.235160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.235188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.516 [2024-11-29 14:31:17.235195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.235204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.235247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.235255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.516 [2024-11-29 14:31:17.235261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.235273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.235330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.235342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.516 [2024-11-29 14:31:17.235349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.235356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.235370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.235378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.516 [2024-11-29 14:31:17.235383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.235391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.245724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.245755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.516 [2024-11-29 14:31:17.245764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.245773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.254876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.254910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.516 [2024-11-29 14:31:17.254919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.254928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.254994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255015] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.516 [2024-11-29 14:31:17.255022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.516 [2024-11-29 14:31:17.255077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.516 [2024-11-29 14:31:17.255167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:35.516 [2024-11-29 14:31:17.255215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.516 [2024-11-29 14:31:17.255275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.516 [2024-11-29 14:31:17.255335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.516 [2024-11-29 14:31:17.255342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.516 [2024-11-29 14:31:17.255350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.516 [2024-11-29 14:31:17.255470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.687 ms, result 0 00:23:35.516 true 00:23:35.516 14:31:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89359 00:23:35.516 14:31:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89359 00:23:35.516 14:31:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:35.776 [2024-11-29 14:31:17.346478] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
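
After the nbd copy completes, the trace above (dirty_shutdown.sh@78-@87) tears the stack down: the nbd device is synced and stopped, ftl0 is unloaded over RPC (producing the 'FTL shutdown' trace above), the spdk_tgt process (pid 89359 in this run) is force-killed and its shared-memory trace file removed, and a second 1 GiB random file is generated for the post-shutdown write. A rough bash condensation of those xtrace lines, using only commands that appear in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

sync /dev/nbd0                               # flush the nbd block device (@78)
"$rpc" nbd_stop_disk /dev/nbd0               # detach ftl0 from /dev/nbd0 (@79)
"$rpc" bdev_ftl_unload -b ftl0               # FTL shutdown trace above, result 0 (@80)
kill -9 89359                                # force-kill spdk_tgt; pid taken from this run (@83)
rm -f /dev/shm/spdk_tgt_trace.pid89359       # drop its shared-memory trace file (@84)
# second 1 GiB random data set (@87)
"$spdk_dd" --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
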
00:23:35.776 [2024-11-29 14:31:17.346606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89941 ] 00:23:35.776 [2024-11-29 14:31:17.492982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.776 [2024-11-29 14:31:17.533447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.150  [2024-11-29T14:31:19.877Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-29T14:31:20.811Z] Copying: 513/1024 [MB] (256 MBps) [2024-11-29T14:31:21.745Z] Copying: 766/1024 [MB] (253 MBps) [2024-11-29T14:31:21.745Z] Copying: 1015/1024 [MB] (248 MBps) [2024-11-29T14:31:22.005Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:23:40.211 00:23:40.211 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89359 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:40.211 14:31:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:40.211 [2024-11-29 14:31:21.885970] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:40.211 [2024-11-29 14:31:21.886092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89994 ] 00:23:40.470 [2024-11-29 14:31:22.030693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.470 [2024-11-29 14:31:22.073384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.470 [2024-11-29 14:31:22.173154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.470 [2024-11-29 14:31:22.173215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.470 [2024-11-29 14:31:22.235781] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:40.470 [2024-11-29 14:31:22.236254] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:40.470 [2024-11-29 14:31:22.236732] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:41.037 [2024-11-29 14:31:22.662582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.662618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:41.037 [2024-11-29 14:31:22.662629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:41.037 [2024-11-29 14:31:22.662637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.662679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.662690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:41.037 [2024-11-29 14:31:22.662701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:41.037 [2024-11-29 14:31:22.662707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.662719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
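
The dirty_shutdown.sh@88 line above then writes testfile2 through the ftl0 bdev itself: with the original target gone, spdk_dd is pointed at the bdev via --ob and brings the bdev stack up from the JSON configuration saved earlier (the save_subsystem_config output at @64-@66, presumably redirected into ftl.json). The FTL startup trace that follows, including the blobstore recovery and "Set FTL dirty state" steps, is emitted by this spdk_dd process. A sketch of the invocation with parameters copied verbatim from the trace (the SPDK variable is just shorthand for the repo path shown there):

SPDK=/home/vagrant/spdk_repo/spdk

# --ob targets the ftl0 bdev directly (no kernel block device in between);
# --count and --seek values are copied verbatim from the @88 xtrace line
"$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/testfile2" \
    --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json="$SPDK/test/ftl/config/ftl.json"
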
00:23:41.037 [2024-11-29 14:31:22.662902] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:41.037 [2024-11-29 14:31:22.662917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.662923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:41.037 [2024-11-29 14:31:22.662932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:23:41.037 [2024-11-29 14:31:22.662937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.664204] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:41.037 [2024-11-29 14:31:22.667099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.667132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:41.037 [2024-11-29 14:31:22.667141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:23:41.037 [2024-11-29 14:31:22.667147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.667191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.667198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:41.037 [2024-11-29 14:31:22.667208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:41.037 [2024-11-29 14:31:22.667216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.673484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.673516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.037 [2024-11-29 14:31:22.673523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.223 ms 00:23:41.037 [2024-11-29 14:31:22.673530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.673599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.673607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.037 [2024-11-29 14:31:22.673616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:41.037 [2024-11-29 14:31:22.673624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.037 [2024-11-29 14:31:22.673658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.037 [2024-11-29 14:31:22.673670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:41.037 [2024-11-29 14:31:22.673680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:41.037 [2024-11-29 14:31:22.673686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.038 [2024-11-29 14:31:22.673702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:41.038 [2024-11-29 14:31:22.675272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.038 [2024-11-29 14:31:22.675295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.038 [2024-11-29 14:31:22.675302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:23:41.038 [2024-11-29 14:31:22.675308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:41.038 [2024-11-29 14:31:22.675337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.038 [2024-11-29 14:31:22.675344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:41.038 [2024-11-29 14:31:22.675350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:41.038 [2024-11-29 14:31:22.675359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.038 [2024-11-29 14:31:22.675374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:41.038 [2024-11-29 14:31:22.675390] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:41.038 [2024-11-29 14:31:22.675420] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:41.038 [2024-11-29 14:31:22.675439] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:41.038 [2024-11-29 14:31:22.675533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:41.038 [2024-11-29 14:31:22.675544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:41.038 [2024-11-29 14:31:22.675555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:41.038 [2024-11-29 14:31:22.675564] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675572] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:41.038 [2024-11-29 14:31:22.675584] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:41.038 [2024-11-29 14:31:22.675593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:41.038 [2024-11-29 14:31:22.675600] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:41.038 [2024-11-29 14:31:22.675606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.038 [2024-11-29 14:31:22.675616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:41.038 [2024-11-29 14:31:22.675622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:23:41.038 [2024-11-29 14:31:22.675630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.038 [2024-11-29 14:31:22.675693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.038 [2024-11-29 14:31:22.675700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:41.038 [2024-11-29 14:31:22.675706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:41.038 [2024-11-29 14:31:22.675716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.038 [2024-11-29 14:31:22.675791] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:41.038 [2024-11-29 14:31:22.675804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:41.038 [2024-11-29 14:31:22.675812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:41.038 [2024-11-29 
14:31:22.675818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:41.038 [2024-11-29 14:31:22.675832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:41.038 [2024-11-29 14:31:22.675848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:41.038 [2024-11-29 14:31:22.675859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:41.038 [2024-11-29 14:31:22.675864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:41.038 [2024-11-29 14:31:22.675869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:41.038 [2024-11-29 14:31:22.675879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:41.038 [2024-11-29 14:31:22.675884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:41.038 [2024-11-29 14:31:22.675889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:41.038 [2024-11-29 14:31:22.675903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:41.038 [2024-11-29 14:31:22.675922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:41.038 [2024-11-29 14:31:22.675939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:41.038 [2024-11-29 14:31:22.675956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:41.038 [2024-11-29 14:31:22.675973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.038 [2024-11-29 14:31:22.675985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:41.038 [2024-11-29 14:31:22.675991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:41.038 [2024-11-29 14:31:22.675998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:41.038 [2024-11-29 14:31:22.676003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:23:41.038 [2024-11-29 14:31:22.676010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:41.038 [2024-11-29 14:31:22.676016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:41.038 [2024-11-29 14:31:22.676021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:41.038 [2024-11-29 14:31:22.676027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:41.038 [2024-11-29 14:31:22.676034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.676040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:41.038 [2024-11-29 14:31:22.676045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:41.038 [2024-11-29 14:31:22.676050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.676056] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:41.038 [2024-11-29 14:31:22.676063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:41.038 [2024-11-29 14:31:22.676073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:41.038 [2024-11-29 14:31:22.676079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.038 [2024-11-29 14:31:22.676086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:41.038 [2024-11-29 14:31:22.676092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:41.038 [2024-11-29 14:31:22.676099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:41.038 [2024-11-29 14:31:22.676105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:41.038 [2024-11-29 14:31:22.676117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:41.038 [2024-11-29 14:31:22.676124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:41.038 [2024-11-29 14:31:22.676131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:41.038 [2024-11-29 14:31:22.676139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:41.038 [2024-11-29 14:31:22.676146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:41.038 [2024-11-29 14:31:22.676152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:41.038 [2024-11-29 14:31:22.676158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:41.038 [2024-11-29 14:31:22.676165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:41.038 [2024-11-29 14:31:22.676172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:41.038 [2024-11-29 14:31:22.676178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:41.038 [2024-11-29 14:31:22.676184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x6920 blk_sz:0x800 00:23:41.038 [2024-11-29 14:31:22.676190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:41.038 [2024-11-29 14:31:22.676196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:41.039 [2024-11-29 14:31:22.676202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:41.039 [2024-11-29 14:31:22.676235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:41.039 [2024-11-29 14:31:22.676244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:41.039 [2024-11-29 14:31:22.676261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:41.039 [2024-11-29 14:31:22.676267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:41.039 [2024-11-29 14:31:22.676273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:41.039 [2024-11-29 14:31:22.676279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.676288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:41.039 [2024-11-29 14:31:22.676297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:23:41.039 [2024-11-29 14:31:22.676303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.699390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.699429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.039 [2024-11-29 14:31:22.699442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.049 ms 00:23:41.039 [2024-11-29 14:31:22.699449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.699531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.699539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:41.039 [2024-11-29 14:31:22.699548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:41.039 
[2024-11-29 14:31:22.699559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.708878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.708905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.039 [2024-11-29 14:31:22.708914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.273 ms 00:23:41.039 [2024-11-29 14:31:22.708923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.708948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.708958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.039 [2024-11-29 14:31:22.708965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:41.039 [2024-11-29 14:31:22.708972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.709371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.709396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:41.039 [2024-11-29 14:31:22.709403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:23:41.039 [2024-11-29 14:31:22.709409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.709542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.709554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.039 [2024-11-29 14:31:22.709564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:23:41.039 [2024-11-29 14:31:22.709574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.715050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.715079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.039 [2024-11-29 14:31:22.715091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.458 ms 00:23:41.039 [2024-11-29 14:31:22.715098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.718141] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:41.039 [2024-11-29 14:31:22.718169] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:41.039 [2024-11-29 14:31:22.718179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.718186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:41.039 [2024-11-29 14:31:22.718192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:23:41.039 [2024-11-29 14:31:22.718198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.732156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.732184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:41.039 [2024-11-29 14:31:22.732194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.927 ms 00:23:41.039 [2024-11-29 14:31:22.732201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.733815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.733845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:41.039 [2024-11-29 14:31:22.733853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:23:41.039 [2024-11-29 14:31:22.733859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.735180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.735207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:41.039 [2024-11-29 14:31:22.735214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:23:41.039 [2024-11-29 14:31:22.735220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.735467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.735500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:41.039 [2024-11-29 14:31:22.735510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:23:41.039 [2024-11-29 14:31:22.735517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.753724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.753756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:41.039 [2024-11-29 14:31:22.753765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.189 ms 00:23:41.039 [2024-11-29 14:31:22.753771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.759614] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:41.039 [2024-11-29 14:31:22.762073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.762104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:41.039 [2024-11-29 14:31:22.762113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.270 ms 00:23:41.039 [2024-11-29 14:31:22.762120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.762167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.762175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:41.039 [2024-11-29 14:31:22.762186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:41.039 [2024-11-29 14:31:22.762193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.762275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.762285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:41.039 [2024-11-29 14:31:22.762292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:41.039 [2024-11-29 14:31:22.762298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.762314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.762320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:23:41.039 [2024-11-29 14:31:22.762330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:41.039 [2024-11-29 14:31:22.762336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.762364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:41.039 [2024-11-29 14:31:22.762374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.762381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:41.039 [2024-11-29 14:31:22.762387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:41.039 [2024-11-29 14:31:22.762392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.765874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.765902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:41.039 [2024-11-29 14:31:22.765910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:23:41.039 [2024-11-29 14:31:22.765917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.765976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.039 [2024-11-29 14:31:22.765990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:41.039 [2024-11-29 14:31:22.765996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:41.039 [2024-11-29 14:31:22.766003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.039 [2024-11-29 14:31:22.766883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.918 ms, result 0 00:23:42.419  [2024-11-29T14:31:24.783Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-29T14:31:26.165Z] Copying: 37/1024 [MB] (17 MBps) [2024-11-29T14:31:27.107Z] Copying: 48/1024 [MB] (11 MBps) [2024-11-29T14:31:28.043Z] Copying: 59/1024 [MB] (10 MBps) [2024-11-29T14:31:28.983Z] Copying: 70/1024 [MB] (11 MBps) [2024-11-29T14:31:29.916Z] Copying: 81/1024 [MB] (10 MBps) [2024-11-29T14:31:30.851Z] Copying: 92/1024 [MB] (11 MBps) [2024-11-29T14:31:31.785Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-29T14:31:33.162Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-29T14:31:34.105Z] Copying: 126/1024 [MB] (11 MBps) [2024-11-29T14:31:35.037Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-29T14:31:35.990Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-29T14:31:36.928Z] Copying: 160/1024 [MB] (11 MBps) [2024-11-29T14:31:37.905Z] Copying: 171/1024 [MB] (11 MBps) [2024-11-29T14:31:38.841Z] Copying: 183/1024 [MB] (11 MBps) [2024-11-29T14:31:40.219Z] Copying: 194/1024 [MB] (11 MBps) [2024-11-29T14:31:40.791Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-29T14:31:42.166Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-29T14:31:43.102Z] Copying: 228/1024 [MB] (11 MBps) [2024-11-29T14:31:44.037Z] Copying: 240/1024 [MB] (11 MBps) [2024-11-29T14:31:44.972Z] Copying: 251/1024 [MB] (11 MBps) [2024-11-29T14:31:45.905Z] Copying: 263/1024 [MB] (11 MBps) [2024-11-29T14:31:46.840Z] Copying: 274/1024 [MB] (11 MBps) [2024-11-29T14:31:48.217Z] Copying: 285/1024 [MB] (11 MBps) [2024-11-29T14:31:48.784Z] Copying: 296/1024 [MB] (11 MBps) [2024-11-29T14:31:50.172Z] Copying: 308/1024 [MB] (11 MBps) [2024-11-29T14:31:51.109Z] Copying: 318/1024 [MB] (10 MBps) [2024-11-29T14:31:52.046Z] Copying: 
329/1024 [MB] (10 MBps) [2024-11-29T14:31:52.982Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-29T14:31:53.918Z] Copying: 352/1024 [MB] (11 MBps) [2024-11-29T14:31:54.849Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-29T14:31:55.782Z] Copying: 375/1024 [MB] (11 MBps) [2024-11-29T14:31:57.157Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-29T14:31:58.090Z] Copying: 398/1024 [MB] (11 MBps) [2024-11-29T14:31:59.026Z] Copying: 410/1024 [MB] (11 MBps) [2024-11-29T14:31:59.963Z] Copying: 421/1024 [MB] (11 MBps) [2024-11-29T14:32:00.899Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-29T14:32:01.834Z] Copying: 443/1024 [MB] (11 MBps) [2024-11-29T14:32:03.215Z] Copying: 455/1024 [MB] (11 MBps) [2024-11-29T14:32:03.782Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-29T14:32:05.157Z] Copying: 477/1024 [MB] (10 MBps) [2024-11-29T14:32:06.091Z] Copying: 488/1024 [MB] (11 MBps) [2024-11-29T14:32:07.026Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-29T14:32:07.962Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-29T14:32:08.907Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-29T14:32:09.881Z] Copying: 534/1024 [MB] (11 MBps) [2024-11-29T14:32:10.815Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-29T14:32:12.190Z] Copying: 557/1024 [MB] (11 MBps) [2024-11-29T14:32:13.130Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-29T14:32:14.066Z] Copying: 578/1024 [MB] (10 MBps) [2024-11-29T14:32:15.001Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-29T14:32:15.943Z] Copying: 601/1024 [MB] (11 MBps) [2024-11-29T14:32:16.881Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-29T14:32:17.815Z] Copying: 623/1024 [MB] (10 MBps) [2024-11-29T14:32:19.190Z] Copying: 634/1024 [MB] (11 MBps) [2024-11-29T14:32:20.128Z] Copying: 646/1024 [MB] (11 MBps) [2024-11-29T14:32:21.071Z] Copying: 657/1024 [MB] (11 MBps) [2024-11-29T14:32:22.013Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-29T14:32:22.947Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-29T14:32:23.882Z] Copying: 689/1024 [MB] (11 MBps) [2024-11-29T14:32:24.825Z] Copying: 700/1024 [MB] (11 MBps) [2024-11-29T14:32:26.204Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-29T14:32:27.138Z] Copying: 721/1024 [MB] (10 MBps) [2024-11-29T14:32:28.074Z] Copying: 733/1024 [MB] (11 MBps) [2024-11-29T14:32:29.008Z] Copying: 745/1024 [MB] (12 MBps) [2024-11-29T14:32:29.942Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-29T14:32:30.878Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-29T14:32:31.817Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-29T14:32:33.203Z] Copying: 791/1024 [MB] (11 MBps) [2024-11-29T14:32:34.137Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-29T14:32:35.071Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-29T14:32:36.005Z] Copying: 824/1024 [MB] (11 MBps) [2024-11-29T14:32:36.941Z] Copying: 836/1024 [MB] (12 MBps) [2024-11-29T14:32:37.881Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-29T14:32:38.821Z] Copying: 858/1024 [MB] (10 MBps) [2024-11-29T14:32:40.247Z] Copying: 869/1024 [MB] (10 MBps) [2024-11-29T14:32:40.813Z] Copying: 880/1024 [MB] (11 MBps) [2024-11-29T14:32:42.194Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-29T14:32:43.130Z] Copying: 907/1024 [MB] (15 MBps) [2024-11-29T14:32:44.069Z] Copying: 918/1024 [MB] (11 MBps) [2024-11-29T14:32:45.010Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-29T14:32:45.954Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-29T14:32:46.889Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-29T14:32:47.825Z] Copying: 961/1024 [MB] (11 MBps) [2024-11-29T14:32:49.202Z] Copying: 972/1024 [MB] (11 MBps) [2024-11-29T14:32:50.138Z] Copying: 984/1024 [MB] (11 MBps) [2024-11-29T14:32:51.079Z] Copying: 995/1024 [MB] 
(11 MBps) [2024-11-29T14:32:52.013Z] Copying: 1005/1024 [MB] (10 MBps) [2024-11-29T14:32:52.584Z] Copying: 1016/1024 [MB] (11 MBps) [2024-11-29T14:32:52.584Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-29 14:32:52.419700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.419746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:10.790 [2024-11-29 14:32:52.419759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.790 [2024-11-29 14:32:52.419765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.419782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:10.790 [2024-11-29 14:32:52.420300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.420327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:10.790 [2024-11-29 14:32:52.420334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:25:10.790 [2024-11-29 14:32:52.420340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.422794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.422824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:10.790 [2024-11-29 14:32:52.422832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:25:10.790 [2024-11-29 14:32:52.422838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.437805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.437830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:10.790 [2024-11-29 14:32:52.437839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.955 ms 00:25:10.790 [2024-11-29 14:32:52.437849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.442468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.442500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:10.790 [2024-11-29 14:32:52.442509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.600 ms 00:25:10.790 [2024-11-29 14:32:52.442519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.444534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.444560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:10.790 [2024-11-29 14:32:52.444567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:25:10.790 [2024-11-29 14:32:52.444573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.448554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.448579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:10.790 [2024-11-29 14:32:52.448590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:25:10.790 [2024-11-29 14:32:52.448596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.451339] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.451368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:10.790 [2024-11-29 14:32:52.451375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:25:10.790 [2024-11-29 14:32:52.451382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.453967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.453991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:10.790 [2024-11-29 14:32:52.453997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:25:10.790 [2024-11-29 14:32:52.454003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.790 [2024-11-29 14:32:52.456115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.790 [2024-11-29 14:32:52.456139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:10.790 [2024-11-29 14:32:52.456145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:25:10.790 [2024-11-29 14:32:52.456150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-11-29 14:32:52.457650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-11-29 14:32:52.457673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:10.791 [2024-11-29 14:32:52.457680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:25:10.791 [2024-11-29 14:32:52.457685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-11-29 14:32:52.459222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.791 [2024-11-29 14:32:52.459246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:10.791 [2024-11-29 14:32:52.459253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:25:10.791 [2024-11-29 14:32:52.459258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.791 [2024-11-29 14:32:52.459279] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:10.791 [2024-11-29 14:32:52.459290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:25:10.791 [2024-11-29 14:32:52.459298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459489] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 
14:32:52.459650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:10.791 [2024-11-29 14:32:52.459673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:10.792 [2024-11-29 14:32:52.459797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:10.792 [2024-11-29 14:32:52.459905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:10.792 [2024-11-29 14:32:52.459912] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee36d1dd-8366-42ba-9e96-84f84953a1f1 00:25:10.792 [2024-11-29 14:32:52.459923] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:25:10.792 [2024-11-29 14:32:52.459932] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:25:10.792 [2024-11-29 14:32:52.459938] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:25:10.792 [2024-11-29 14:32:52.459945] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:25:10.792 [2024-11-29 14:32:52.459951] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:10.792 [2024-11-29 14:32:52.459957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:10.792 [2024-11-29 14:32:52.459964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:10.792 [2024-11-29 14:32:52.459969] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:10.792 [2024-11-29 14:32:52.459974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:10.792 [2024-11-29 14:32:52.459984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-11-29 14:32:52.459991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:10.792 [2024-11-29 14:32:52.459997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:25:10.792 [2024-11-29 14:32:52.460005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.461718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-11-29 14:32:52.461738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:10.792 [2024-11-29 14:32:52.461745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:25:10.792 [2024-11-29 14:32:52.461752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.461840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.792 [2024-11-29 14:32:52.461847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:10.792 [2024-11-29 14:32:52.461857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:10.792 [2024-11-29 14:32:52.461865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.466908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.792 [2024-11-29 14:32:52.466933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.792 [2024-11-29 14:32:52.466940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.792 [2024-11-29 14:32:52.466947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.466987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.792 [2024-11-29 14:32:52.466994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.792 [2024-11-29 14:32:52.467004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.792 [2024-11-29 14:32:52.467009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.467076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.792 [2024-11-29 14:32:52.467085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.792 [2024-11-29 14:32:52.467092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.792 [2024-11-29 14:32:52.467102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.792 [2024-11-29 14:32:52.467114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.467120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.793 [2024-11-29 14:32:52.467126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.467131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.477418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.477447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:25:10.793 [2024-11-29 14:32:52.477460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.477468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.793 [2024-11-29 14:32:52.486179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.793 [2024-11-29 14:32:52.486246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.793 [2024-11-29 14:32:52.486285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.793 [2024-11-29 14:32:52.486364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:10.793 [2024-11-29 14:32:52.486405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.793 [2024-11-29 14:32:52.486464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.793 [2024-11-29 14:32:52.486528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.793 [2024-11-29 14:32:52.486534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.793 [2024-11-29 14:32:52.486540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.793 [2024-11-29 14:32:52.486645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
66.917 ms, result 0 00:25:11.361 00:25:11.361 00:25:11.361 14:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:13.902 14:32:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:13.903 [2024-11-29 14:32:55.131697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:13.903 [2024-11-29 14:32:55.131812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90950 ] 00:25:13.903 [2024-11-29 14:32:55.281318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.903 [2024-11-29 14:32:55.335889] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.903 [2024-11-29 14:32:55.435546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.903 [2024-11-29 14:32:55.435597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.903 [2024-11-29 14:32:55.586468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.586512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.903 [2024-11-29 14:32:55.586526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:13.903 [2024-11-29 14:32:55.586532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.586569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.586578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.903 [2024-11-29 14:32:55.586584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:13.903 [2024-11-29 14:32:55.586590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.586604] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.903 [2024-11-29 14:32:55.586785] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.903 [2024-11-29 14:32:55.586799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.586805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.903 [2024-11-29 14:32:55.586813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:13.903 [2024-11-29 14:32:55.586820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.588062] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:13.903 [2024-11-29 14:32:55.590839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.590867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:13.903 [2024-11-29 14:32:55.590875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:25:13.903 [2024-11-29 14:32:55.590882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:13.903 [2024-11-29 14:32:55.590934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.590945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:13.903 [2024-11-29 14:32:55.590956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:13.903 [2024-11-29 14:32:55.590962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.597062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.597087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.903 [2024-11-29 14:32:55.597099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.060 ms 00:25:13.903 [2024-11-29 14:32:55.597108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.597176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.597186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.903 [2024-11-29 14:32:55.597193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:13.903 [2024-11-29 14:32:55.597199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.597237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.597245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.903 [2024-11-29 14:32:55.597252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:13.903 [2024-11-29 14:32:55.597258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.597279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.903 [2024-11-29 14:32:55.598787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.598813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.903 [2024-11-29 14:32:55.598820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:25:13.903 [2024-11-29 14:32:55.598826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.598853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.598863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.903 [2024-11-29 14:32:55.598872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.903 [2024-11-29 14:32:55.598878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.598893] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:13.903 [2024-11-29 14:32:55.598912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:13.903 [2024-11-29 14:32:55.598944] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:13.903 [2024-11-29 14:32:55.598957] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:13.903 [2024-11-29 14:32:55.599054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.903 [2024-11-29 14:32:55.599064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.903 [2024-11-29 14:32:55.599072] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.903 [2024-11-29 14:32:55.599081] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599091] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599097] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:13.903 [2024-11-29 14:32:55.599102] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.903 [2024-11-29 14:32:55.599108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.903 [2024-11-29 14:32:55.599114] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.903 [2024-11-29 14:32:55.599120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.599126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.903 [2024-11-29 14:32:55.599132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:25:13.903 [2024-11-29 14:32:55.599138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.599203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.903 [2024-11-29 14:32:55.599212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.903 [2024-11-29 14:32:55.599218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:13.903 [2024-11-29 14:32:55.599227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.903 [2024-11-29 14:32:55.599299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.903 [2024-11-29 14:32:55.599314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.903 [2024-11-29 14:32:55.599321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.903 [2024-11-29 14:32:55.599345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.903 [2024-11-29 14:32:55.599363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.903 [2024-11-29 14:32:55.599375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.903 [2024-11-29 14:32:55.599382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:13.903 [2024-11-29 14:32:55.599388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.903 [2024-11-29 14:32:55.599396] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:25:13.903 [2024-11-29 14:32:55.599402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:13.903 [2024-11-29 14:32:55.599408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.903 [2024-11-29 14:32:55.599421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.903 [2024-11-29 14:32:55.599439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.903 [2024-11-29 14:32:55.599458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.903 [2024-11-29 14:32:55.599476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.903 [2024-11-29 14:32:55.599504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.903 [2024-11-29 14:32:55.599510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:13.903 [2024-11-29 14:32:55.599516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.904 [2024-11-29 14:32:55.599522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.904 [2024-11-29 14:32:55.599528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:13.904 [2024-11-29 14:32:55.599534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.904 [2024-11-29 14:32:55.599540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.904 [2024-11-29 14:32:55.599546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:13.904 [2024-11-29 14:32:55.599552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.904 [2024-11-29 14:32:55.599558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.904 [2024-11-29 14:32:55.599565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:13.904 [2024-11-29 14:32:55.599571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.904 [2024-11-29 14:32:55.599578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.904 [2024-11-29 14:32:55.599584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:13.904 [2024-11-29 14:32:55.599590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.904 [2024-11-29 14:32:55.599598] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.904 [2024-11-29 14:32:55.599605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.904 [2024-11-29 
14:32:55.599612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.904 [2024-11-29 14:32:55.599622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.904 [2024-11-29 14:32:55.599629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.904 [2024-11-29 14:32:55.599635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.904 [2024-11-29 14:32:55.599642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.904 [2024-11-29 14:32:55.599648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.904 [2024-11-29 14:32:55.599654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.904 [2024-11-29 14:32:55.599661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.904 [2024-11-29 14:32:55.599669] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.904 [2024-11-29 14:32:55.599678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:13.904 [2024-11-29 14:32:55.599692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:13.904 [2024-11-29 14:32:55.599699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:13.904 [2024-11-29 14:32:55.599706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:13.904 [2024-11-29 14:32:55.599714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:13.904 [2024-11-29 14:32:55.599721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:13.904 [2024-11-29 14:32:55.599728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:13.904 [2024-11-29 14:32:55.599735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:13.904 [2024-11-29 14:32:55.599741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:13.904 [2024-11-29 14:32:55.599748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:13.904 [2024-11-29 14:32:55.599781] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:13.904 [2024-11-29 14:32:55.599789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.904 [2024-11-29 14:32:55.599802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.904 [2024-11-29 14:32:55.599810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.904 [2024-11-29 14:32:55.599816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.904 [2024-11-29 14:32:55.599826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.599833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.904 [2024-11-29 14:32:55.599843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:25:13.904 [2024-11-29 14:32:55.599851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.620429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.620472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.904 [2024-11-29 14:32:55.620486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.545 ms 00:25:13.904 [2024-11-29 14:32:55.620506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.620595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.620610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.904 [2024-11-29 14:32:55.620620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:13.904 [2024-11-29 14:32:55.620627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.631098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.631129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.904 [2024-11-29 14:32:55.631142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.415 ms 00:25:13.904 [2024-11-29 14:32:55.631151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.631189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.631199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.904 [2024-11-29 14:32:55.631209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.904 [2024-11-29 14:32:55.631218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.631685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.631748] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.904 [2024-11-29 14:32:55.631760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:25:13.904 [2024-11-29 14:32:55.631771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.631925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.631938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.904 [2024-11-29 14:32:55.631950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:25:13.904 [2024-11-29 14:32:55.631960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.637687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.637709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.904 [2024-11-29 14:32:55.637720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.704 ms 00:25:13.904 [2024-11-29 14:32:55.637730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.640621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:25:13.904 [2024-11-29 14:32:55.640648] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:13.904 [2024-11-29 14:32:55.640657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.640664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:13.904 [2024-11-29 14:32:55.640671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:25:13.904 [2024-11-29 14:32:55.640677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.652025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.652057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:13.904 [2024-11-29 14:32:55.652071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.318 ms 00:25:13.904 [2024-11-29 14:32:55.652078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.654042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.654067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:13.904 [2024-11-29 14:32:55.654074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:25:13.904 [2024-11-29 14:32:55.654080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.655851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.655875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:13.904 [2024-11-29 14:32:55.655882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:25:13.904 [2024-11-29 14:32:55.655887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.656144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.656154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 
00:25:13.904 [2024-11-29 14:32:55.656161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:25:13.904 [2024-11-29 14:32:55.656166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.904 [2024-11-29 14:32:55.673823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.904 [2024-11-29 14:32:55.673859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:13.904 [2024-11-29 14:32:55.673868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.645 ms 00:25:13.905 [2024-11-29 14:32:55.673874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.679985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:13.905 [2024-11-29 14:32:55.682235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.682260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:13.905 [2024-11-29 14:32:55.682274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.332 ms 00:25:13.905 [2024-11-29 14:32:55.682280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.682325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.682334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:13.905 [2024-11-29 14:32:55.682340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:13.905 [2024-11-29 14:32:55.682347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.682989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.683013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:13.905 [2024-11-29 14:32:55.683024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:25:13.905 [2024-11-29 14:32:55.683048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.683070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.683076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:13.905 [2024-11-29 14:32:55.683083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:13.905 [2024-11-29 14:32:55.683089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.683120] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:13.905 [2024-11-29 14:32:55.683129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.683135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:13.905 [2024-11-29 14:32:55.683142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:13.905 [2024-11-29 14:32:55.683148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.686949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.686975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:13.905 [2024-11-29 14:32:55.686983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:25:13.905 
[2024-11-29 14:32:55.686992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.687056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.905 [2024-11-29 14:32:55.687065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:13.905 [2024-11-29 14:32:55.687072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:13.905 [2024-11-29 14:32:55.687078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.905 [2024-11-29 14:32:55.688103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.273 ms, result 0 00:25:15.289  [2024-11-29T14:32:58.023Z] Copying: 956/1048576 [kB] (956 kBps) [2024-11-29T14:32:58.965Z] Copying: 2188/1048576 [kB] (1232 kBps) [2024-11-29T14:32:59.901Z] Copying: 5276/1048576 [kB] (3088 kBps) [2024-11-29T14:33:01.279Z] Copying: 26/1024 [MB] (21 MBps) [2024-11-29T14:33:02.213Z] Copying: 50/1024 [MB] (23 MBps) [2024-11-29T14:33:03.148Z] Copying: 76/1024 [MB] (26 MBps) [2024-11-29T14:33:04.097Z] Copying: 97/1024 [MB] (21 MBps) [2024-11-29T14:33:05.032Z] Copying: 118/1024 [MB] (20 MBps) [2024-11-29T14:33:05.979Z] Copying: 135/1024 [MB] (17 MBps) [2024-11-29T14:33:06.913Z] Copying: 153/1024 [MB] (18 MBps) [2024-11-29T14:33:08.287Z] Copying: 171/1024 [MB] (18 MBps) [2024-11-29T14:33:09.220Z] Copying: 189/1024 [MB] (17 MBps) [2024-11-29T14:33:10.157Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-29T14:33:11.091Z] Copying: 229/1024 [MB] (21 MBps) [2024-11-29T14:33:12.093Z] Copying: 253/1024 [MB] (23 MBps) [2024-11-29T14:33:13.063Z] Copying: 271/1024 [MB] (18 MBps) [2024-11-29T14:33:13.998Z] Copying: 289/1024 [MB] (17 MBps) [2024-11-29T14:33:14.934Z] Copying: 307/1024 [MB] (17 MBps) [2024-11-29T14:33:16.311Z] Copying: 325/1024 [MB] (17 MBps) [2024-11-29T14:33:16.882Z] Copying: 342/1024 [MB] (17 MBps) [2024-11-29T14:33:18.259Z] Copying: 359/1024 [MB] (16 MBps) [2024-11-29T14:33:19.194Z] Copying: 376/1024 [MB] (17 MBps) [2024-11-29T14:33:20.130Z] Copying: 394/1024 [MB] (18 MBps) [2024-11-29T14:33:21.074Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-29T14:33:22.018Z] Copying: 430/1024 [MB] (17 MBps) [2024-11-29T14:33:22.960Z] Copying: 445/1024 [MB] (15 MBps) [2024-11-29T14:33:23.903Z] Copying: 461/1024 [MB] (15 MBps) [2024-11-29T14:33:25.289Z] Copying: 478/1024 [MB] (16 MBps) [2024-11-29T14:33:26.231Z] Copying: 493/1024 [MB] (15 MBps) [2024-11-29T14:33:27.175Z] Copying: 509/1024 [MB] (15 MBps) [2024-11-29T14:33:28.120Z] Copying: 525/1024 [MB] (16 MBps) [2024-11-29T14:33:29.064Z] Copying: 562/1024 [MB] (37 MBps) [2024-11-29T14:33:30.009Z] Copying: 593/1024 [MB] (31 MBps) [2024-11-29T14:33:30.949Z] Copying: 614/1024 [MB] (20 MBps) [2024-11-29T14:33:31.892Z] Copying: 636/1024 [MB] (22 MBps) [2024-11-29T14:33:33.279Z] Copying: 668/1024 [MB] (31 MBps) [2024-11-29T14:33:34.223Z] Copying: 696/1024 [MB] (28 MBps) [2024-11-29T14:33:35.166Z] Copying: 716/1024 [MB] (19 MBps) [2024-11-29T14:33:36.106Z] Copying: 746/1024 [MB] (30 MBps) [2024-11-29T14:33:37.047Z] Copying: 768/1024 [MB] (21 MBps) [2024-11-29T14:33:37.993Z] Copying: 793/1024 [MB] (25 MBps) [2024-11-29T14:33:38.938Z] Copying: 822/1024 [MB] (29 MBps) [2024-11-29T14:33:39.882Z] Copying: 851/1024 [MB] (29 MBps) [2024-11-29T14:33:41.268Z] Copying: 875/1024 [MB] (24 MBps) [2024-11-29T14:33:42.213Z] Copying: 906/1024 [MB] (30 MBps) [2024-11-29T14:33:43.169Z] Copying: 941/1024 [MB] (34 MBps) [2024-11-29T14:33:44.225Z] Copying: 
966/1024 [MB] (25 MBps) [2024-11-29T14:33:45.169Z] Copying: 989/1024 [MB] (22 MBps) [2024-11-29T14:33:45.169Z] Copying: 1019/1024 [MB] (29 MBps) [2024-11-29T14:33:45.740Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-29 14:33:45.565487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.565584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:03.946 [2024-11-29 14:33:45.565601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:03.946 [2024-11-29 14:33:45.565612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.565644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:03.946 [2024-11-29 14:33:45.566536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.566566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:03.946 [2024-11-29 14:33:45.566587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:26:03.946 [2024-11-29 14:33:45.566598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.566842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.566866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:03.946 [2024-11-29 14:33:45.566883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:26:03.946 [2024-11-29 14:33:45.567000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.588521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.588579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:03.946 [2024-11-29 14:33:45.588594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:26:03.946 [2024-11-29 14:33:45.588611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.595455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.595513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:03.946 [2024-11-29 14:33:45.595527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:26:03.946 [2024-11-29 14:33:45.595538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.598211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.598263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:03.946 [2024-11-29 14:33:45.598277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:26:03.946 [2024-11-29 14:33:45.598285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.603771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.603839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:03.946 [2024-11-29 14:33:45.603851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:26:03.946 [2024-11-29 14:33:45.603864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.606750] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.606798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:03.946 [2024-11-29 14:33:45.606821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:26:03.946 [2024-11-29 14:33:45.606829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.609284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.609337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:03.946 [2024-11-29 14:33:45.609348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:26:03.946 [2024-11-29 14:33:45.609355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.611522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.611586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:03.946 [2024-11-29 14:33:45.611596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:26:03.946 [2024-11-29 14:33:45.611603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.613326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.613373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:03.946 [2024-11-29 14:33:45.613384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:26:03.946 [2024-11-29 14:33:45.613391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.614927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.946 [2024-11-29 14:33:45.614974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:03.946 [2024-11-29 14:33:45.614984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:26:03.946 [2024-11-29 14:33:45.614991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.946 [2024-11-29 14:33:45.615032] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:03.946 [2024-11-29 14:33:45.615061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:03.946 [2024-11-29 14:33:45.615073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:03.946 [2024-11-29 14:33:45.615082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:03.946 [2024-11-29 14:33:45.615091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615132] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 
14:33:45.615323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:26:03.947 [2024-11-29 14:33:45.615533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:03.947 [2024-11-29 14:33:45.615797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:03.948 [2024-11-29 14:33:45.615882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:03.948 [2024-11-29 14:33:45.615891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee36d1dd-8366-42ba-9e96-84f84953a1f1 00:26:03.948 [2024-11-29 14:33:45.615900] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:03.948 [2024-11-29 14:33:45.615922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:26:03.948 [2024-11-29 14:33:45.615929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:26:03.948 [2024-11-29 14:33:45.615939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:26:03.948 [2024-11-29 14:33:45.615947] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:03.948 [2024-11-29 14:33:45.615955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:03.948 [2024-11-29 14:33:45.615964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:03.948 [2024-11-29 
14:33:45.615971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:03.948 [2024-11-29 14:33:45.615977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:03.948 [2024-11-29 14:33:45.615985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.948 [2024-11-29 14:33:45.615993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:03.948 [2024-11-29 14:33:45.616002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:26:03.948 [2024-11-29 14:33:45.616010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.618410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.948 [2024-11-29 14:33:45.618461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.948 [2024-11-29 14:33:45.618474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:26:03.948 [2024-11-29 14:33:45.618483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.618634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.948 [2024-11-29 14:33:45.618645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.948 [2024-11-29 14:33:45.618654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:26:03.948 [2024-11-29 14:33:45.618665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.625782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.625830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.948 [2024-11-29 14:33:45.625842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.625850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.625911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.625920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.948 [2024-11-29 14:33:45.625929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.625946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.625991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.626006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.948 [2024-11-29 14:33:45.626015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.626023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.626038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.626046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.948 [2024-11-29 14:33:45.626054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.626061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.639545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.639591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:26:03.948 [2024-11-29 14:33:45.639603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.639611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.649635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.649681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.948 [2024-11-29 14:33:45.649704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.649712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.649763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.649772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.948 [2024-11-29 14:33:45.649785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.649793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.649828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.649838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.948 [2024-11-29 14:33:45.649845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.649853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.649919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.649934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.948 [2024-11-29 14:33:45.649942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.649949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.649986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.649996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.948 [2024-11-29 14:33:45.650004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.650011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.650049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.650062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.948 [2024-11-29 14:33:45.650070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.650078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.650124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.948 [2024-11-29 14:33:45.650135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.948 [2024-11-29 14:33:45.650143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.948 [2024-11-29 14:33:45.650150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.948 [2024-11-29 14:33:45.650281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 84.765 ms, result 0 00:26:04.207 00:26:04.207 00:26:04.207 14:33:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:06.748 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:06.748 14:33:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:06.748 [2024-11-29 14:33:48.119624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:06.748 [2024-11-29 14:33:48.119758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91490 ] 00:26:06.748 [2024-11-29 14:33:48.271410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.748 [2024-11-29 14:33:48.308980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.748 [2024-11-29 14:33:48.396958] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:06.748 [2024-11-29 14:33:48.397019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:07.010 [2024-11-29 14:33:48.554969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.555008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:07.010 [2024-11-29 14:33:48.555023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:07.010 [2024-11-29 14:33:48.555033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.555088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.555099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.010 [2024-11-29 14:33:48.555107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:07.010 [2024-11-29 14:33:48.555118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.555141] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:07.010 [2024-11-29 14:33:48.555875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:07.010 [2024-11-29 14:33:48.555914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.555924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.010 [2024-11-29 14:33:48.555934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:26:07.010 [2024-11-29 14:33:48.555944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.557100] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:07.010 [2024-11-29 14:33:48.559832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.559863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:07.010 [2024-11-29 14:33:48.559873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.733 ms 00:26:07.010 [2024-11-29 14:33:48.559880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.559942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.559953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:07.010 [2024-11-29 14:33:48.559962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:07.010 [2024-11-29 14:33:48.559969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.565048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.565076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.010 [2024-11-29 14:33:48.565086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:26:07.010 [2024-11-29 14:33:48.565096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.565176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.565188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.010 [2024-11-29 14:33:48.565196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:07.010 [2024-11-29 14:33:48.565204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.565244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.565259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:07.010 [2024-11-29 14:33:48.565267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:07.010 [2024-11-29 14:33:48.565274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.565296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:07.010 [2024-11-29 14:33:48.566690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.566711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.010 [2024-11-29 14:33:48.566720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:26:07.010 [2024-11-29 14:33:48.566727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.566759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.566766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:07.010 [2024-11-29 14:33:48.566774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:07.010 [2024-11-29 14:33:48.566780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.010 [2024-11-29 14:33:48.566798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:07.010 [2024-11-29 14:33:48.566823] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:07.010 [2024-11-29 14:33:48.566860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:07.010 [2024-11-29 14:33:48.566874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 
0x190 bytes 00:26:07.010 [2024-11-29 14:33:48.566975] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:07.010 [2024-11-29 14:33:48.566985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:07.010 [2024-11-29 14:33:48.566995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:07.010 [2024-11-29 14:33:48.567005] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:07.010 [2024-11-29 14:33:48.567016] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:07.010 [2024-11-29 14:33:48.567024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:07.010 [2024-11-29 14:33:48.567034] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:07.010 [2024-11-29 14:33:48.567041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:07.010 [2024-11-29 14:33:48.567058] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:07.010 [2024-11-29 14:33:48.567066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.010 [2024-11-29 14:33:48.567076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:07.010 [2024-11-29 14:33:48.567083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:07.011 [2024-11-29 14:33:48.567090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.567173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.567183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:07.011 [2024-11-29 14:33:48.567194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:07.011 [2024-11-29 14:33:48.567201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.567295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:07.011 [2024-11-29 14:33:48.567308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:07.011 [2024-11-29 14:33:48.567316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:07.011 [2024-11-29 14:33:48.567344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:07.011 [2024-11-29 14:33:48.567368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.011 [2024-11-29 14:33:48.567384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:07.011 [2024-11-29 14:33:48.567396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:07.011 [2024-11-29 14:33:48.567404] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.011 [2024-11-29 14:33:48.567411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:07.011 [2024-11-29 14:33:48.567419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:07.011 [2024-11-29 14:33:48.567427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:07.011 [2024-11-29 14:33:48.567442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:07.011 [2024-11-29 14:33:48.567465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:07.011 [2024-11-29 14:33:48.567487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:07.011 [2024-11-29 14:33:48.567522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:07.011 [2024-11-29 14:33:48.567549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:07.011 [2024-11-29 14:33:48.567572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.011 [2024-11-29 14:33:48.567586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:07.011 [2024-11-29 14:33:48.567594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:07.011 [2024-11-29 14:33:48.567601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.011 [2024-11-29 14:33:48.567610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:07.011 [2024-11-29 14:33:48.567617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:07.011 [2024-11-29 14:33:48.567625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:07.011 [2024-11-29 14:33:48.567641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:07.011 [2024-11-29 14:33:48.567648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:07.011 [2024-11-29 
14:33:48.567667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:07.011 [2024-11-29 14:33:48.567676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.011 [2024-11-29 14:33:48.567784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:07.011 [2024-11-29 14:33:48.567792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:07.011 [2024-11-29 14:33:48.567800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:07.011 [2024-11-29 14:33:48.567807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:07.011 [2024-11-29 14:33:48.567814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:07.011 [2024-11-29 14:33:48.567820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:07.011 [2024-11-29 14:33:48.567829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:07.011 [2024-11-29 14:33:48.567838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:07.011 [2024-11-29 14:33:48.567854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:07.011 [2024-11-29 14:33:48.567862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:07.011 [2024-11-29 14:33:48.567868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:07.011 [2024-11-29 14:33:48.567877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:07.011 [2024-11-29 14:33:48.567885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:07.011 [2024-11-29 14:33:48.567892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:07.011 [2024-11-29 14:33:48.567899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:07.011 [2024-11-29 14:33:48.567906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:07.011 [2024-11-29 14:33:48.567913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:07.011 [2024-11-29 14:33:48.567949] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:07.011 [2024-11-29 14:33:48.567960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:07.011 [2024-11-29 14:33:48.567981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:07.011 [2024-11-29 14:33:48.567988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:07.011 [2024-11-29 14:33:48.567995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:07.011 [2024-11-29 14:33:48.568006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.568013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:07.011 [2024-11-29 14:33:48.568021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:26:07.011 [2024-11-29 14:33:48.568028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.585808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.585852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.011 [2024-11-29 14:33:48.585865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:26:07.011 [2024-11-29 14:33:48.585873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.585963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.585976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:07.011 [2024-11-29 14:33:48.585987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:07.011 [2024-11-29 14:33:48.585995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.595697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.595733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.011 [2024-11-29 14:33:48.595746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.645 ms 00:26:07.011 [2024-11-29 14:33:48.595756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.595793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.011 [2024-11-29 14:33:48.595804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:07.011 [2024-11-29 14:33:48.595815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:07.011 [2024-11-29 14:33:48.595825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.011 [2024-11-29 14:33:48.596237] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.596271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:07.012 [2024-11-29 14:33:48.596284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:26:07.012 [2024-11-29 14:33:48.596293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.596461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.596480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:07.012 [2024-11-29 14:33:48.596522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:07.012 [2024-11-29 14:33:48.596534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.601878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.601910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:07.012 [2024-11-29 14:33:48.601920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.298 ms 00:26:07.012 [2024-11-29 14:33:48.601931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.604793] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:07.012 [2024-11-29 14:33:48.604827] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:07.012 [2024-11-29 14:33:48.604838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.604845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:07.012 [2024-11-29 14:33:48.604854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.831 ms 00:26:07.012 [2024-11-29 14:33:48.604860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.619879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.619915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:07.012 [2024-11-29 14:33:48.619928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.979 ms 00:26:07.012 [2024-11-29 14:33:48.619936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.622202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.622232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:07.012 [2024-11-29 14:33:48.622240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:26:07.012 [2024-11-29 14:33:48.622247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.624292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.624321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:07.012 [2024-11-29 14:33:48.624330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:26:07.012 [2024-11-29 14:33:48.624336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.624683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 
14:33:48.624696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:07.012 [2024-11-29 14:33:48.624705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:26:07.012 [2024-11-29 14:33:48.624712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.642628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.642673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:07.012 [2024-11-29 14:33:48.642684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:26:07.012 [2024-11-29 14:33:48.642692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.650298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:07.012 [2024-11-29 14:33:48.652858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.652887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:07.012 [2024-11-29 14:33:48.652906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.128 ms 00:26:07.012 [2024-11-29 14:33:48.652915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.652985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.652996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:07.012 [2024-11-29 14:33:48.653005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:07.012 [2024-11-29 14:33:48.653012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.653664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.653685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:07.012 [2024-11-29 14:33:48.653697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:26:07.012 [2024-11-29 14:33:48.653705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.653732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.653746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:07.012 [2024-11-29 14:33:48.653754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:07.012 [2024-11-29 14:33:48.653761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.653795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:07.012 [2024-11-29 14:33:48.653805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.653813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:07.012 [2024-11-29 14:33:48.653821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:07.012 [2024-11-29 14:33:48.653831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.658156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.658191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:07.012 [2024-11-29 
14:33:48.658203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:26:07.012 [2024-11-29 14:33:48.658212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.658295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.012 [2024-11-29 14:33:48.658308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:07.012 [2024-11-29 14:33:48.658318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:07.012 [2024-11-29 14:33:48.658325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.012 [2024-11-29 14:33:48.660519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.095 ms, result 0 00:26:08.396  [2024-11-29T14:33:51.130Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-29T14:33:52.068Z] Copying: 37/1024 [MB] (23 MBps) [2024-11-29T14:33:53.009Z] Copying: 55/1024 [MB] (18 MBps) [2024-11-29T14:33:53.947Z] Copying: 74/1024 [MB] (18 MBps) [2024-11-29T14:33:54.889Z] Copying: 87/1024 [MB] (13 MBps) [2024-11-29T14:33:55.837Z] Copying: 98/1024 [MB] (10 MBps) [2024-11-29T14:33:57.223Z] Copying: 111/1024 [MB] (13 MBps) [2024-11-29T14:33:58.168Z] Copying: 131/1024 [MB] (20 MBps) [2024-11-29T14:33:59.112Z] Copying: 151/1024 [MB] (20 MBps) [2024-11-29T14:34:00.056Z] Copying: 172/1024 [MB] (20 MBps) [2024-11-29T14:34:00.999Z] Copying: 186/1024 [MB] (13 MBps) [2024-11-29T14:34:01.943Z] Copying: 202/1024 [MB] (15 MBps) [2024-11-29T14:34:02.888Z] Copying: 223/1024 [MB] (20 MBps) [2024-11-29T14:34:03.833Z] Copying: 244/1024 [MB] (21 MBps) [2024-11-29T14:34:05.222Z] Copying: 266/1024 [MB] (22 MBps) [2024-11-29T14:34:06.163Z] Copying: 284/1024 [MB] (17 MBps) [2024-11-29T14:34:07.104Z] Copying: 305/1024 [MB] (20 MBps) [2024-11-29T14:34:08.045Z] Copying: 316/1024 [MB] (10 MBps) [2024-11-29T14:34:08.988Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-29T14:34:09.932Z] Copying: 342/1024 [MB] (15 MBps) [2024-11-29T14:34:10.877Z] Copying: 360/1024 [MB] (18 MBps) [2024-11-29T14:34:12.265Z] Copying: 375/1024 [MB] (14 MBps) [2024-11-29T14:34:12.837Z] Copying: 394/1024 [MB] (19 MBps) [2024-11-29T14:34:14.220Z] Copying: 409/1024 [MB] (15 MBps) [2024-11-29T14:34:15.163Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-29T14:34:16.163Z] Copying: 444/1024 [MB] (18 MBps) [2024-11-29T14:34:17.105Z] Copying: 464/1024 [MB] (19 MBps) [2024-11-29T14:34:18.049Z] Copying: 476/1024 [MB] (12 MBps) [2024-11-29T14:34:18.992Z] Copying: 496/1024 [MB] (20 MBps) [2024-11-29T14:34:19.937Z] Copying: 514/1024 [MB] (17 MBps) [2024-11-29T14:34:20.881Z] Copying: 539/1024 [MB] (24 MBps) [2024-11-29T14:34:21.841Z] Copying: 556/1024 [MB] (17 MBps) [2024-11-29T14:34:23.226Z] Copying: 567/1024 [MB] (10 MBps) [2024-11-29T14:34:24.172Z] Copying: 584/1024 [MB] (16 MBps) [2024-11-29T14:34:25.116Z] Copying: 606/1024 [MB] (21 MBps) [2024-11-29T14:34:26.067Z] Copying: 622/1024 [MB] (16 MBps) [2024-11-29T14:34:27.011Z] Copying: 646/1024 [MB] (24 MBps) [2024-11-29T14:34:27.957Z] Copying: 668/1024 [MB] (21 MBps) [2024-11-29T14:34:28.902Z] Copying: 688/1024 [MB] (20 MBps) [2024-11-29T14:34:29.846Z] Copying: 708/1024 [MB] (19 MBps) [2024-11-29T14:34:31.233Z] Copying: 727/1024 [MB] (19 MBps) [2024-11-29T14:34:32.173Z] Copying: 743/1024 [MB] (16 MBps) [2024-11-29T14:34:33.117Z] Copying: 764/1024 [MB] (20 MBps) [2024-11-29T14:34:34.062Z] Copying: 782/1024 [MB] (18 MBps) [2024-11-29T14:34:35.006Z] Copying: 804/1024 [MB] (21 MBps) 
[2024-11-29T14:34:35.969Z] Copying: 825/1024 [MB] (21 MBps) [2024-11-29T14:34:36.912Z] Copying: 841/1024 [MB] (16 MBps) [2024-11-29T14:34:37.856Z] Copying: 867/1024 [MB] (25 MBps) [2024-11-29T14:34:39.244Z] Copying: 884/1024 [MB] (17 MBps) [2024-11-29T14:34:40.187Z] Copying: 903/1024 [MB] (19 MBps) [2024-11-29T14:34:41.132Z] Copying: 922/1024 [MB] (18 MBps) [2024-11-29T14:34:42.078Z] Copying: 940/1024 [MB] (18 MBps) [2024-11-29T14:34:43.023Z] Copying: 952/1024 [MB] (12 MBps) [2024-11-29T14:34:43.968Z] Copying: 976/1024 [MB] (23 MBps) [2024-11-29T14:34:44.913Z] Copying: 988/1024 [MB] (12 MBps) [2024-11-29T14:34:45.856Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-29T14:34:46.884Z] Copying: 1009/1024 [MB] (10 MBps) [2024-11-29T14:34:47.219Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 14:34:46.919270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.919560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:05.425 [2024-11-29 14:34:46.919598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:05.425 [2024-11-29 14:34:46.919614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.919654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:05.425 [2024-11-29 14:34:46.920536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.920563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:05.425 [2024-11-29 14:34:46.920575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:27:05.425 [2024-11-29 14:34:46.920592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.920931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.920943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:05.425 [2024-11-29 14:34:46.920952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:27:05.425 [2024-11-29 14:34:46.920960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.924559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.924588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:05.425 [2024-11-29 14:34:46.924599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:27:05.425 [2024-11-29 14:34:46.924607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.930867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.930898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.425 [2024-11-29 14:34:46.930910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:27:05.425 [2024-11-29 14:34:46.930918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.934806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.934875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.425 [2024-11-29 14:34:46.934894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:27:05.425 [2024-11-29 
14:34:46.934911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.940249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.940296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.425 [2024-11-29 14:34:46.940307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.273 ms 00:27:05.425 [2024-11-29 14:34:46.940316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.945081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.945124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:05.425 [2024-11-29 14:34:46.945134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:27:05.425 [2024-11-29 14:34:46.945142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.948617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.948677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:05.425 [2024-11-29 14:34:46.948688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:27:05.425 [2024-11-29 14:34:46.948695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.951704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.951745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:05.425 [2024-11-29 14:34:46.951755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:27:05.425 [2024-11-29 14:34:46.951765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.954023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.954064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:05.425 [2024-11-29 14:34:46.954074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:27:05.425 [2024-11-29 14:34:46.954081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.956382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.425 [2024-11-29 14:34:46.956422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:05.425 [2024-11-29 14:34:46.956432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:27:05.425 [2024-11-29 14:34:46.956439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.425 [2024-11-29 14:34:46.956477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:05.425 [2024-11-29 14:34:46.956526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:05.425 [2024-11-29 14:34:46.956538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:05.425 [2024-11-29 14:34:46.956547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956563] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.425 [2024-11-29 14:34:46.956668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 
14:34:46.956922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.956995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:27:05.426 [2024-11-29 14:34:46.957119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:05.426 [2024-11-29 14:34:46.957361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:05.427 [2024-11-29 14:34:46.957479] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:05.427 [2024-11-29 14:34:46.957498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee36d1dd-8366-42ba-9e96-84f84953a1f1 00:27:05.427 [2024-11-29 14:34:46.957512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:05.427 [2024-11-29 14:34:46.957520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:05.427 [2024-11-29 
14:34:46.957528] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:05.427 [2024-11-29 14:34:46.957537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:05.427 [2024-11-29 14:34:46.957545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:05.427 [2024-11-29 14:34:46.957554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:05.427 [2024-11-29 14:34:46.957568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:05.427 [2024-11-29 14:34:46.957575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:05.427 [2024-11-29 14:34:46.957582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:05.427 [2024-11-29 14:34:46.957590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.427 [2024-11-29 14:34:46.957605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:05.427 [2024-11-29 14:34:46.957613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:27:05.427 [2024-11-29 14:34:46.957621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.960062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.427 [2024-11-29 14:34:46.960101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:05.427 [2024-11-29 14:34:46.960114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:27:05.427 [2024-11-29 14:34:46.960122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.960255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.427 [2024-11-29 14:34:46.960265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:05.427 [2024-11-29 14:34:46.960274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:27:05.427 [2024-11-29 14:34:46.960282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.967006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.967057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:05.427 [2024-11-29 14:34:46.967093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.967101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.967165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.967175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.427 [2024-11-29 14:34:46.967184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.967191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.967261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.967272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.427 [2024-11-29 14:34:46.967286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.967297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.967316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.967324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.427 [2024-11-29 14:34:46.967331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.967339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.980840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.980891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.427 [2024-11-29 14:34:46.980903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.980912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.990901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.990952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.427 [2024-11-29 14:34:46.990963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.990971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.991021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.427 [2024-11-29 14:34:46.991039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.991047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.991108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.427 [2024-11-29 14:34:46.991137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.991145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.991213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.427 [2024-11-29 14:34:46.991232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.991240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.991272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:05.427 [2024-11-29 14:34:46.991294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.991301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 [2024-11-29 14:34:46.991339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.427 [2024-11-29 14:34:46.991362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.427 [2024-11-29 14:34:46.991370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.427 
[2024-11-29 14:34:46.991417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.427 [2024-11-29 14:34:46.991430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.427 [2024-11-29 14:34:46.991439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.428 [2024-11-29 14:34:46.991447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.428 [2024-11-29 14:34:46.991635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.335 ms, result 0 00:27:05.428 00:27:05.428 00:27:05.428 14:34:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:07.975 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:07.975 Process with pid 89359 is not found 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89359 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89359 ']' 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89359 00:27:07.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89359) - No such process 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89359 is not found' 00:27:07.975 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:08.236 Remove shared memory files 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:08.236 00:27:08.236 real 4m21.977s 00:27:08.236 user 4m34.639s 00:27:08.236 sys 0m23.286s 00:27:08.236 ************************************ 00:27:08.236 END TEST ftl_dirty_shutdown 00:27:08.236 ************************************ 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.236 14:34:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:08.497 14:34:50 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:08.498 14:34:50 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:08.498 14:34:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.498 14:34:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:08.498 ************************************ 00:27:08.498 START TEST ftl_upgrade_shutdown 00:27:08.498 ************************************ 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:08.498 * Looking for test storage... 00:27:08.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.498 --rc genhtml_branch_coverage=1 00:27:08.498 --rc genhtml_function_coverage=1 00:27:08.498 --rc genhtml_legend=1 00:27:08.498 --rc geninfo_all_blocks=1 00:27:08.498 --rc geninfo_unexecuted_blocks=1 00:27:08.498 00:27:08.498 ' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.498 --rc genhtml_branch_coverage=1 00:27:08.498 --rc genhtml_function_coverage=1 00:27:08.498 --rc genhtml_legend=1 00:27:08.498 --rc geninfo_all_blocks=1 00:27:08.498 --rc geninfo_unexecuted_blocks=1 00:27:08.498 00:27:08.498 ' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.498 --rc genhtml_branch_coverage=1 00:27:08.498 --rc genhtml_function_coverage=1 00:27:08.498 --rc genhtml_legend=1 00:27:08.498 --rc geninfo_all_blocks=1 00:27:08.498 --rc geninfo_unexecuted_blocks=1 00:27:08.498 00:27:08.498 ' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.498 --rc genhtml_branch_coverage=1 00:27:08.498 --rc genhtml_function_coverage=1 00:27:08.498 --rc genhtml_legend=1 00:27:08.498 --rc geninfo_all_blocks=1 00:27:08.498 --rc geninfo_unexecuted_blocks=1 00:27:08.498 00:27:08.498 ' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:08.498 14:34:50 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92192 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92192 00:27:08.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92192 ']' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.498 14:34:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:08.499 14:34:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:08.760 [2024-11-29 14:34:50.301353] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:08.760 [2024-11-29 14:34:50.301520] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92192 ] 00:27:08.760 [2024-11-29 14:34:50.445527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.760 [2024-11-29 14:34:50.497627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:09.705 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:09.966 { 00:27:09.966 "name": "basen1", 00:27:09.966 "aliases": [ 00:27:09.966 "e4ec4f4a-d0bf-40b9-9027-497b556e13c8" 00:27:09.966 ], 00:27:09.966 "product_name": "NVMe disk", 00:27:09.966 "block_size": 4096, 00:27:09.966 "num_blocks": 1310720, 00:27:09.966 "uuid": "e4ec4f4a-d0bf-40b9-9027-497b556e13c8", 00:27:09.966 "numa_id": -1, 00:27:09.966 "assigned_rate_limits": { 00:27:09.966 "rw_ios_per_sec": 0, 00:27:09.966 "rw_mbytes_per_sec": 0, 00:27:09.966 "r_mbytes_per_sec": 0, 00:27:09.966 "w_mbytes_per_sec": 0 00:27:09.966 }, 00:27:09.966 "claimed": true, 00:27:09.966 "claim_type": "read_many_write_one", 00:27:09.966 "zoned": false, 00:27:09.966 "supported_io_types": { 00:27:09.966 "read": true, 00:27:09.966 "write": true, 00:27:09.966 "unmap": true, 00:27:09.966 "flush": true, 00:27:09.966 "reset": true, 00:27:09.966 "nvme_admin": true, 00:27:09.966 "nvme_io": true, 00:27:09.966 "nvme_io_md": false, 00:27:09.966 "write_zeroes": true, 00:27:09.966 "zcopy": false, 00:27:09.966 "get_zone_info": false, 00:27:09.966 "zone_management": false, 00:27:09.966 "zone_append": false, 00:27:09.966 "compare": true, 00:27:09.966 "compare_and_write": false, 00:27:09.966 "abort": true, 00:27:09.966 "seek_hole": false, 00:27:09.966 "seek_data": false, 00:27:09.966 "copy": true, 00:27:09.966 "nvme_iov_md": false 00:27:09.966 }, 00:27:09.966 "driver_specific": { 00:27:09.966 "nvme": [ 00:27:09.966 { 00:27:09.966 "pci_address": "0000:00:11.0", 00:27:09.966 "trid": { 00:27:09.966 "trtype": "PCIe", 00:27:09.966 "traddr": "0000:00:11.0" 00:27:09.966 }, 00:27:09.966 "ctrlr_data": { 00:27:09.966 "cntlid": 0, 00:27:09.966 "vendor_id": "0x1b36", 00:27:09.966 "model_number": "QEMU NVMe Ctrl", 00:27:09.966 "serial_number": "12341", 00:27:09.966 "firmware_revision": "8.0.0", 00:27:09.966 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:09.966 "oacs": { 00:27:09.966 "security": 0, 00:27:09.966 "format": 1, 00:27:09.966 "firmware": 0, 00:27:09.966 "ns_manage": 1 00:27:09.966 }, 00:27:09.966 "multi_ctrlr": false, 00:27:09.966 "ana_reporting": false 00:27:09.966 }, 00:27:09.966 "vs": { 00:27:09.966 "nvme_version": "1.4" 00:27:09.966 }, 00:27:09.966 "ns_data": { 00:27:09.966 "id": 1, 00:27:09.966 "can_share": false 00:27:09.966 } 00:27:09.966 } 00:27:09.966 ], 00:27:09.966 "mp_policy": "active_passive" 00:27:09.966 } 00:27:09.966 } 00:27:09.966 ]' 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:09.966 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:10.227 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=afdaff27-a6c1-4749-90f8-7c741451b014 00:27:10.227 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:10.227 14:34:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afdaff27-a6c1-4749-90f8-7c741451b014 00:27:10.486 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:10.746 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=a186f17b-624a-4efb-9e8c-6d6c984535fc 00:27:10.746 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a186f17b-624a-4efb-9e8c-6d6c984535fc 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=afe412c4-6995-442d-8880-bf471e01bf2c 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z afe412c4-6995-442d-8880-bf471e01bf2c ]] 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 afe412c4-6995-442d-8880-bf471e01bf2c 5120 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=afe412c4-6995-442d-8880-bf471e01bf2c 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size afe412c4-6995-442d-8880-bf471e01bf2c 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=afe412c4-6995-442d-8880-bf471e01bf2c 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afe412c4-6995-442d-8880-bf471e01bf2c 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:11.008 { 00:27:11.008 "name": "afe412c4-6995-442d-8880-bf471e01bf2c", 00:27:11.008 "aliases": [ 00:27:11.008 "lvs/basen1p0" 00:27:11.008 ], 00:27:11.008 "product_name": "Logical Volume", 00:27:11.008 "block_size": 4096, 00:27:11.008 "num_blocks": 5242880, 00:27:11.008 "uuid": "afe412c4-6995-442d-8880-bf471e01bf2c", 00:27:11.008 "assigned_rate_limits": { 00:27:11.008 "rw_ios_per_sec": 0, 00:27:11.008 "rw_mbytes_per_sec": 0, 00:27:11.008 "r_mbytes_per_sec": 0, 00:27:11.008 "w_mbytes_per_sec": 0 00:27:11.008 }, 00:27:11.008 "claimed": false, 00:27:11.008 "zoned": false, 00:27:11.008 "supported_io_types": { 00:27:11.008 "read": true, 00:27:11.008 "write": true, 00:27:11.008 "unmap": true, 00:27:11.008 "flush": false, 00:27:11.008 "reset": true, 00:27:11.008 "nvme_admin": false, 00:27:11.008 "nvme_io": false, 00:27:11.008 "nvme_io_md": false, 00:27:11.008 "write_zeroes": 
true, 00:27:11.008 "zcopy": false, 00:27:11.008 "get_zone_info": false, 00:27:11.008 "zone_management": false, 00:27:11.008 "zone_append": false, 00:27:11.008 "compare": false, 00:27:11.008 "compare_and_write": false, 00:27:11.008 "abort": false, 00:27:11.008 "seek_hole": true, 00:27:11.008 "seek_data": true, 00:27:11.008 "copy": false, 00:27:11.008 "nvme_iov_md": false 00:27:11.008 }, 00:27:11.008 "driver_specific": { 00:27:11.008 "lvol": { 00:27:11.008 "lvol_store_uuid": "a186f17b-624a-4efb-9e8c-6d6c984535fc", 00:27:11.008 "base_bdev": "basen1", 00:27:11.008 "thin_provision": true, 00:27:11.008 "num_allocated_clusters": 0, 00:27:11.008 "snapshot": false, 00:27:11.008 "clone": false, 00:27:11.008 "esnap_clone": false 00:27:11.008 } 00:27:11.008 } 00:27:11.008 } 00:27:11.008 ]' 00:27:11.008 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:11.270 14:34:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:11.531 14:34:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d afe412c4-6995-442d-8880-bf471e01bf2c -c cachen1p0 --l2p_dram_limit 2 00:27:11.793 [2024-11-29 14:34:53.504555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.793 [2024-11-29 14:34:53.504614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:11.793 [2024-11-29 14:34:53.504634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:11.793 [2024-11-29 14:34:53.504645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.793 [2024-11-29 14:34:53.504712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.793 [2024-11-29 14:34:53.504728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:11.793 [2024-11-29 14:34:53.504738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:27:11.793 [2024-11-29 14:34:53.504750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.504778] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:11.794 [2024-11-29 
14:34:53.505075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:11.794 [2024-11-29 14:34:53.505093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.505104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:11.794 [2024-11-29 14:34:53.505114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:27:11.794 [2024-11-29 14:34:53.505125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.505158] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8caeb5f4-05e9-41b1-91be-745cb7348607 00:27:11.794 [2024-11-29 14:34:53.507312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.507368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:11.794 [2024-11-29 14:34:53.507384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:11.794 [2024-11-29 14:34:53.507393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.516109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.516152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:11.794 [2024-11-29 14:34:53.516166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.617 ms 00:27:11.794 [2024-11-29 14:34:53.516174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.516226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.516237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:11.794 [2024-11-29 14:34:53.516248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:11.794 [2024-11-29 14:34:53.516260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.516355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.516366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:11.794 [2024-11-29 14:34:53.516377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:11.794 [2024-11-29 14:34:53.516385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.516411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:11.794 [2024-11-29 14:34:53.518634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.518671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:11.794 [2024-11-29 14:34:53.518684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.232 ms 00:27:11.794 [2024-11-29 14:34:53.518694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.518725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.518736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:11.794 [2024-11-29 14:34:53.518749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:11.794 [2024-11-29 14:34:53.518761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.518782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:11.794 [2024-11-29 14:34:53.518929] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:11.794 [2024-11-29 14:34:53.518940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:11.794 [2024-11-29 14:34:53.518954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:11.794 [2024-11-29 14:34:53.518972] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:11.794 [2024-11-29 14:34:53.518990] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:11.794 [2024-11-29 14:34:53.518999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:11.794 [2024-11-29 14:34:53.519015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:11.794 [2024-11-29 14:34:53.519022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:11.794 [2024-11-29 14:34:53.519031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:11.794 [2024-11-29 14:34:53.519059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.519097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:11.794 [2024-11-29 14:34:53.519106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:27:11.794 [2024-11-29 14:34:53.519116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.519204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.794 [2024-11-29 14:34:53.519217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:11.794 [2024-11-29 14:34:53.519226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:11.794 [2024-11-29 14:34:53.519236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.794 [2024-11-29 14:34:53.519332] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:11.794 [2024-11-29 14:34:53.519348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:11.794 [2024-11-29 14:34:53.519357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:11.794 [2024-11-29 14:34:53.519389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:11.794 [2024-11-29 14:34:53.519410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:11.794 [2024-11-29 14:34:53.519418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:11.794 [2024-11-29 14:34:53.519427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:11.794 [2024-11-29 14:34:53.519445] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:11.794 [2024-11-29 14:34:53.519453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:11.794 [2024-11-29 14:34:53.519472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:11.794 [2024-11-29 14:34:53.519481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:11.794 [2024-11-29 14:34:53.519516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:11.794 [2024-11-29 14:34:53.519524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:11.794 [2024-11-29 14:34:53.519544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:11.794 [2024-11-29 14:34:53.519574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:11.794 [2024-11-29 14:34:53.519600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:11.794 [2024-11-29 14:34:53.519630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:11.794 [2024-11-29 14:34:53.519654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:11.794 [2024-11-29 14:34:53.519683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:11.794 [2024-11-29 14:34:53.519708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:11.794 [2024-11-29 14:34:53.519735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:11.794 [2024-11-29 14:34:53.519742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519751] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:11.794 [2024-11-29 14:34:53.519758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:11.794 [2024-11-29 14:34:53.519770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:11.794 [2024-11-29 14:34:53.519790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:11.794 [2024-11-29 14:34:53.519797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:11.794 [2024-11-29 14:34:53.519805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:11.794 [2024-11-29 14:34:53.519812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:11.794 [2024-11-29 14:34:53.519823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:11.794 [2024-11-29 14:34:53.519829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:11.794 [2024-11-29 14:34:53.519842] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:11.794 [2024-11-29 14:34:53.519852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:11.795 [2024-11-29 14:34:53.519876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:11.795 [2024-11-29 14:34:53.519903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:11.795 [2024-11-29 14:34:53.519911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:11.795 [2024-11-29 14:34:53.519923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:11.795 [2024-11-29 14:34:53.519931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.519980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:11.795 [2024-11-29 14:34:53.519989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:11.795 [2024-11-29 14:34:53.519999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.520009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:11.795 [2024-11-29 14:34:53.520016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:11.795 [2024-11-29 14:34:53.520026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:11.795 [2024-11-29 14:34:53.520034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:11.795 [2024-11-29 14:34:53.520043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:11.795 [2024-11-29 14:34:53.520051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:11.795 [2024-11-29 14:34:53.520063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.775 ms 00:27:11.795 [2024-11-29 14:34:53.520070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:11.795 [2024-11-29 14:34:53.520113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
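As a rough cross-check of the layout dump above (an aside for the reader, not part of the test output): the 3774873 L2P entries at an address size of 4 bytes come to roughly 14.4 MiB, consistent with the 14.50 MiB l2p region, and the reported base and NV cache capacities of 20480.00 MiB and 5120.00 MiB match the FTL_BASE_SIZE and FTL_CACHE_SIZE values checked at the start of this test. A minimal shell sketch of that arithmetic:
  echo $(( 3774873 * 4 ))           # 15099492 bytes needed for the L2P table
  echo $(( 15099492 / 1048576 ))    # ~14 MiB, fits inside the 14.50 MiB l2p region above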
00:27:11.795 [2024-11-29 14:34:53.520122] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:17.088 [2024-11-29 14:34:57.906106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.088 [2024-11-29 14:34:57.906190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:17.088 [2024-11-29 14:34:57.906215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4385.967 ms 00:27:17.088 [2024-11-29 14:34:57.906225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.088 [2024-11-29 14:34:57.920135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.088 [2024-11-29 14:34:57.920184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:17.088 [2024-11-29 14:34:57.920201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.783 ms 00:27:17.088 [2024-11-29 14:34:57.920209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.088 [2024-11-29 14:34:57.920260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.088 [2024-11-29 14:34:57.920269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:17.088 [2024-11-29 14:34:57.920284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:17.088 [2024-11-29 14:34:57.920292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.088 [2024-11-29 14:34:57.931650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.088 [2024-11-29 14:34:57.931692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:17.088 [2024-11-29 14:34:57.931712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.283 ms 00:27:17.088 [2024-11-29 14:34:57.931721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.088 [2024-11-29 14:34:57.931758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.088 [2024-11-29 14:34:57.931770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:17.088 [2024-11-29 14:34:57.931781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:17.089 [2024-11-29 14:34:57.931790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.932350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.932386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:17.089 [2024-11-29 14:34:57.932404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:27:17.089 [2024-11-29 14:34:57.932414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.932467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.932477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:17.089 [2024-11-29 14:34:57.932511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:17.089 [2024-11-29 14:34:57.932520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.951629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.951686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:17.089 [2024-11-29 14:34:57.951709] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.077 ms 00:27:17.089 [2024-11-29 14:34:57.951722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.963835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:17.089 [2024-11-29 14:34:57.965142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.965191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:17.089 [2024-11-29 14:34:57.965203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.280 ms 00:27:17.089 [2024-11-29 14:34:57.965213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.987006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.987063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:17.089 [2024-11-29 14:34:57.987092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.761 ms 00:27:17.089 [2024-11-29 14:34:57.987106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.987217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.987231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:17.089 [2024-11-29 14:34:57.987240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:17.089 [2024-11-29 14:34:57.987250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.992442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.992506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:17.089 [2024-11-29 14:34:57.992518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.171 ms 00:27:17.089 [2024-11-29 14:34:57.992529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.997679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.997725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:17.089 [2024-11-29 14:34:57.997735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.097 ms 00:27:17.089 [2024-11-29 14:34:57.997745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:57.998070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:57.998082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:17.089 [2024-11-29 14:34:57.998097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:27:17.089 [2024-11-29 14:34:57.998109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.046147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:58.046202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:17.089 [2024-11-29 14:34:58.046215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.016 ms 00:27:17.089 [2024-11-29 14:34:58.046225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.053221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:17.089 [2024-11-29 14:34:58.053279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:17.089 [2024-11-29 14:34:58.053291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.936 ms 00:27:17.089 [2024-11-29 14:34:58.053302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.059332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:58.059559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:17.089 [2024-11-29 14:34:58.059580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.001 ms 00:27:17.089 [2024-11-29 14:34:58.059594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.065377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:58.065433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:17.089 [2024-11-29 14:34:58.065444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.752 ms 00:27:17.089 [2024-11-29 14:34:58.065457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.065486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:58.065513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:17.089 [2024-11-29 14:34:58.065522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:17.089 [2024-11-29 14:34:58.065531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.065601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-11-29 14:34:58.065613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:17.089 [2024-11-29 14:34:58.065621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:17.089 [2024-11-29 14:34:58.065631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-11-29 14:34:58.066724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4561.702 ms, result 0 00:27:17.089 { 00:27:17.089 "name": "ftl", 00:27:17.089 "uuid": "8caeb5f4-05e9-41b1-91be-745cb7348607" 00:27:17.089 } 00:27:17.089 14:34:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:17.089 [2024-11-29 14:34:58.284996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.089 14:34:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:17.089 14:34:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:17.089 [2024-11-29 14:34:58.701460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:17.089 14:34:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:17.350 [2024-11-29 14:34:58.917887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:17.350 14:34:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:17.613 Fill FTL, iteration 1 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92325 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92325 /var/tmp/spdk.tgt.sock 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92325 ']' 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:17.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:17.613 14:34:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.613 [2024-11-29 14:34:59.365975] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
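For orientation on the fill parameters set above (illustrative arithmetic only, not part of the test script): bs=1048576 and count=1024 mean each fill pass writes exactly 1 GiB, matching size=1073741824, and with iterations=2 the test pushes 2 GiB of random data through the FTL bdev in two consecutive slices. A minimal sketch:
  echo $(( 1048576 * 1024 ))        # 1073741824 bytes = 1 GiB per fill pass
  echo $(( 1048576 * 1024 * 2 ))    # 2147483648 bytes total across iterations=2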
00:27:17.613 [2024-11-29 14:34:59.366374] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92325 ] 00:27:17.873 [2024-11-29 14:34:59.517799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.873 [2024-11-29 14:34:59.567616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.445 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:18.445 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:18.445 14:35:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:18.707 ftln1 00:27:18.707 14:35:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:18.707 14:35:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92325 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92325 ']' 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92325 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92325 00:27:18.968 killing process with pid 92325 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92325' 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92325 00:27:18.968 14:35:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92325 00:27:19.581 14:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:19.581 14:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:19.581 [2024-11-29 14:35:01.133340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:19.581 [2024-11-29 14:35:01.133487] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92360 ] 00:27:19.581 [2024-11-29 14:35:01.284021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.842 [2024-11-29 14:35:01.345779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.786  [2024-11-29T14:35:03.963Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-29T14:35:04.906Z] Copying: 386/1024 [MB] (213 MBps) [2024-11-29T14:35:05.846Z] Copying: 634/1024 [MB] (248 MBps) [2024-11-29T14:35:06.416Z] Copying: 886/1024 [MB] (252 MBps) [2024-11-29T14:35:06.416Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:27:24.622 00:27:24.622 Calculate MD5 checksum, iteration 1 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:24.622 14:35:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:24.622 [2024-11-29 14:35:06.341110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:24.622 [2024-11-29 14:35:06.341400] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92414 ] 00:27:24.884 [2024-11-29 14:35:06.490628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.884 [2024-11-29 14:35:06.534563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.271  [2024-11-29T14:35:08.636Z] Copying: 565/1024 [MB] (565 MBps) [2024-11-29T14:35:08.895Z] Copying: 1024/1024 [MB] (average 555 MBps) 00:27:27.101 00:27:27.101 14:35:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:27.101 14:35:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:29.001 Fill FTL, iteration 2 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=559d79a732bbd5f06301cc6b3aa7aa9c 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:29.001 14:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:29.001 [2024-11-29 14:35:10.774394] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
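The trace above shows the test's per-slice bookkeeping: after each 1 GiB fill, the slice is read back from ftln1 via the TCP initiator socket, hashed, and the digest stored before the next slice is written a further 1024 MiB in. A minimal sketch of that pattern, reconstructed from the trace (names as they appear there):
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')   # digest of the slice just read back
  (( i++ ))                                                                      # advance to the next iteration
  seek=1024                                                                      # second fill starts 1024 MiB into ftln1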
00:27:29.001 [2024-11-29 14:35:10.774488] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92465 ] 00:27:29.260 [2024-11-29 14:35:10.913427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.260 [2024-11-29 14:35:10.942618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.644  [2024-11-29T14:35:13.383Z] Copying: 248/1024 [MB] (248 MBps) [2024-11-29T14:35:14.327Z] Copying: 499/1024 [MB] (251 MBps) [2024-11-29T14:35:15.268Z] Copying: 742/1024 [MB] (243 MBps) [2024-11-29T14:35:15.558Z] Copying: 983/1024 [MB] (241 MBps) [2024-11-29T14:35:15.558Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:27:33.764 00:27:33.764 Calculate MD5 checksum, iteration 2 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:33.764 14:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:33.764 [2024-11-29 14:35:15.509213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
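Note how the two read-back passes divide the written range: the first used --skip=0 and the second above uses --skip=1024, so together they cover the same two 1 GiB slices that the fills wrote at --seek=0 and --seek=1024. Illustrative byte ranges (not part of the test output):
  echo $(( 0 * 1048576 )) $(( 1024 * 1048576 - 1 ))       # pass 1: bytes 0 .. 1073741823
  echo $(( 1024 * 1048576 )) $(( 2048 * 1048576 - 1 ))    # pass 2: bytes 1073741824 .. 2147483647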
00:27:33.764 [2024-11-29 14:35:15.509326] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92518 ] 00:27:34.024 [2024-11-29 14:35:15.657118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.024 [2024-11-29 14:35:15.691859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.404  [2024-11-29T14:35:17.765Z] Copying: 609/1024 [MB] (609 MBps) [2024-11-29T14:35:18.334Z] Copying: 1024/1024 [MB] (average 613 MBps) 00:27:36.540 00:27:36.540 14:35:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:36.540 14:35:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:38.438 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:38.438 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3db89e31c5dffef94516bb3363376b8e 00:27:38.438 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:38.438 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:38.438 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:38.735 [2024-11-29 14:35:20.359588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.735 [2024-11-29 14:35:20.359641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.735 [2024-11-29 14:35:20.359653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.735 [2024-11-29 14:35:20.359661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.735 [2024-11-29 14:35:20.359679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.735 [2024-11-29 14:35:20.359686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.735 [2024-11-29 14:35:20.359697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.735 [2024-11-29 14:35:20.359704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.735 [2024-11-29 14:35:20.359719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.735 [2024-11-29 14:35:20.359726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.735 [2024-11-29 14:35:20.359732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:38.735 [2024-11-29 14:35:20.359738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.735 [2024-11-29 14:35:20.359796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.191 ms, result 0 00:27:38.735 true 00:27:38.735 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:39.023 { 00:27:39.023 "name": "ftl", 00:27:39.023 "properties": [ 00:27:39.023 { 00:27:39.023 "name": "superblock_version", 00:27:39.023 "value": 5, 00:27:39.023 "read-only": true 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "name": "base_device", 00:27:39.023 "bands": [ 00:27:39.023 { 00:27:39.023 "id": 0, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 
00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 1, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 2, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 3, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 4, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 5, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 6, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 7, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 8, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 9, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 10, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 11, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 12, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 13, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 14, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 15, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 16, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 17, 00:27:39.023 "state": "FREE", 00:27:39.023 "validity": 0.0 00:27:39.023 } 00:27:39.023 ], 00:27:39.023 "read-only": true 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "name": "cache_device", 00:27:39.023 "type": "bdev", 00:27:39.023 "chunks": [ 00:27:39.023 { 00:27:39.023 "id": 0, 00:27:39.023 "state": "INACTIVE", 00:27:39.023 "utilization": 0.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 1, 00:27:39.023 "state": "CLOSED", 00:27:39.023 "utilization": 1.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 2, 00:27:39.023 "state": "CLOSED", 00:27:39.023 "utilization": 1.0 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 3, 00:27:39.023 "state": "OPEN", 00:27:39.023 "utilization": 0.001953125 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "id": 4, 00:27:39.023 "state": "OPEN", 00:27:39.023 "utilization": 0.0 00:27:39.023 } 00:27:39.023 ], 00:27:39.023 "read-only": true 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "name": "verbose_mode", 00:27:39.023 "value": true, 00:27:39.023 "unit": "", 00:27:39.023 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:39.023 }, 00:27:39.023 { 00:27:39.023 "name": "prep_upgrade_on_shutdown", 00:27:39.023 "value": false, 00:27:39.023 "unit": "", 00:27:39.023 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:39.023 } 00:27:39.023 ] 00:27:39.023 } 00:27:39.023 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:39.023 [2024-11-29 14:35:20.756952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
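An aside on the cache_device chunk list dumped above: utilization is non-zero for chunks 1, 2, and 3, so the used-chunk count computed a few lines below via jq evaluates to 3 and the non-empty check passes. A sketch of that query against the JSON above (mirroring the command in the trace, shown here only for readability):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
  # -> 3  (chunks 1 and 2 CLOSED at 1.0, chunk 3 OPEN at 0.001953125)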
00:27:39.023 [2024-11-29 14:35:20.757106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:39.023 [2024-11-29 14:35:20.757158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:39.023 [2024-11-29 14:35:20.757177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.023 [2024-11-29 14:35:20.757209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.023 [2024-11-29 14:35:20.757225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:39.023 [2024-11-29 14:35:20.757240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:39.023 [2024-11-29 14:35:20.757255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.023 [2024-11-29 14:35:20.757280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.023 [2024-11-29 14:35:20.757295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:39.023 [2024-11-29 14:35:20.757311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:39.023 [2024-11-29 14:35:20.757359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.023 [2024-11-29 14:35:20.757416] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.452 ms, result 0 00:27:39.023 true 00:27:39.023 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:39.023 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:39.023 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:39.286 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:39.286 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:39.286 14:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:39.545 [2024-11-29 14:35:21.161257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.545 [2024-11-29 14:35:21.161287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:39.545 [2024-11-29 14:35:21.161294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:39.545 [2024-11-29 14:35:21.161300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.545 [2024-11-29 14:35:21.161315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.545 [2024-11-29 14:35:21.161321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:39.545 [2024-11-29 14:35:21.161327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:39.545 [2024-11-29 14:35:21.161332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.545 [2024-11-29 14:35:21.161346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.545 [2024-11-29 14:35:21.161353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:39.545 [2024-11-29 14:35:21.161358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:39.545 [2024-11-29 14:35:21.161364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:39.545 [2024-11-29 14:35:21.161402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.135 ms, result 0 00:27:39.545 true 00:27:39.545 14:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:39.804 { 00:27:39.804 "name": "ftl", 00:27:39.804 "properties": [ 00:27:39.804 { 00:27:39.804 "name": "superblock_version", 00:27:39.804 "value": 5, 00:27:39.804 "read-only": true 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "name": "base_device", 00:27:39.804 "bands": [ 00:27:39.804 { 00:27:39.804 "id": 0, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 1, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 2, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 3, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 4, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 5, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 6, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 7, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 8, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 9, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 10, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 11, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 12, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 13, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.804 "id": 14, 00:27:39.804 "state": "FREE", 00:27:39.804 "validity": 0.0 00:27:39.804 }, 00:27:39.804 { 00:27:39.805 "id": 15, 00:27:39.805 "state": "FREE", 00:27:39.805 "validity": 0.0 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 16, 00:27:39.805 "state": "FREE", 00:27:39.805 "validity": 0.0 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 17, 00:27:39.805 "state": "FREE", 00:27:39.805 "validity": 0.0 00:27:39.805 } 00:27:39.805 ], 00:27:39.805 "read-only": true 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "name": "cache_device", 00:27:39.805 "type": "bdev", 00:27:39.805 "chunks": [ 00:27:39.805 { 00:27:39.805 "id": 0, 00:27:39.805 "state": "INACTIVE", 00:27:39.805 "utilization": 0.0 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 1, 00:27:39.805 "state": "CLOSED", 00:27:39.805 "utilization": 1.0 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 2, 00:27:39.805 "state": "CLOSED", 00:27:39.805 "utilization": 1.0 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 3, 00:27:39.805 "state": "OPEN", 00:27:39.805 "utilization": 0.001953125 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "id": 4, 00:27:39.805 "state": "OPEN", 00:27:39.805 "utilization": 0.0 00:27:39.805 } 00:27:39.805 ], 00:27:39.805 "read-only": true 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "name": "verbose_mode", 
00:27:39.805 "value": true, 00:27:39.805 "unit": "", 00:27:39.805 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:39.805 }, 00:27:39.805 { 00:27:39.805 "name": "prep_upgrade_on_shutdown", 00:27:39.805 "value": true, 00:27:39.805 "unit": "", 00:27:39.805 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:39.805 } 00:27:39.805 ] 00:27:39.805 } 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92192 ]] 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92192 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92192 ']' 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92192 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92192 00:27:39.805 killing process with pid 92192 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92192' 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92192 00:27:39.805 14:35:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92192 00:27:39.805 [2024-11-29 14:35:21.525425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:39.805 [2024-11-29 14:35:21.528858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.805 [2024-11-29 14:35:21.528889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:39.805 [2024-11-29 14:35:21.528900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.805 [2024-11-29 14:35:21.528907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.805 [2024-11-29 14:35:21.528925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:39.805 [2024-11-29 14:35:21.529449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.805 [2024-11-29 14:35:21.529465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:39.805 [2024-11-29 14:35:21.529473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:27:39.805 [2024-11-29 14:35:21.529485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.957478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.957677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:49.797 [2024-11-29 14:35:29.957733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8427.926 ms 00:27:49.797 [2024-11-29 14:35:29.957753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.958850] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.958936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:49.797 [2024-11-29 14:35:29.958981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.066 ms 00:27:49.797 [2024-11-29 14:35:29.959000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.959885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.959957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:49.797 [2024-11-29 14:35:29.960002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:27:49.797 [2024-11-29 14:35:29.960021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.961871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.961961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:49.797 [2024-11-29 14:35:29.962006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.805 ms 00:27:49.797 [2024-11-29 14:35:29.962023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.964479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.964580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:49.797 [2024-11-29 14:35:29.964623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.424 ms 00:27:49.797 [2024-11-29 14:35:29.964642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.964704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.964722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:49.797 [2024-11-29 14:35:29.964738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:49.797 [2024-11-29 14:35:29.964757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.965966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.966051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:49.797 [2024-11-29 14:35:29.966090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:27:49.797 [2024-11-29 14:35:29.966108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.967508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.967587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:49.797 [2024-11-29 14:35:29.967624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.329 ms 00:27:49.797 [2024-11-29 14:35:29.967640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.969220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.969302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:49.797 [2024-11-29 14:35:29.969337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.550 ms 00:27:49.797 [2024-11-29 14:35:29.969353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.971024] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.971109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:49.797 [2024-11-29 14:35:29.971146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.618 ms 00:27:49.797 [2024-11-29 14:35:29.971162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.971190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:49.797 [2024-11-29 14:35:29.971211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.797 [2024-11-29 14:35:29.971235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.797 [2024-11-29 14:35:29.971258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:49.797 [2024-11-29 14:35:29.971281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.971984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.972021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.797 [2024-11-29 14:35:29.972047] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:49.797 [2024-11-29 14:35:29.972063] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8caeb5f4-05e9-41b1-91be-745cb7348607 00:27:49.797 [2024-11-29 14:35:29.972086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:49.797 [2024-11-29 14:35:29.972101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:49.797 [2024-11-29 14:35:29.972115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:49.797 [2024-11-29 14:35:29.972130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:49.797 [2024-11-29 14:35:29.972146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:49.797 [2024-11-29 14:35:29.972160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:49.797 [2024-11-29 14:35:29.972179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:49.797 [2024-11-29 14:35:29.972193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:49.797 [2024-11-29 14:35:29.972318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:49.797 [2024-11-29 14:35:29.972336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.972351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:49.797 [2024-11-29 14:35:29.972367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:27:49.797 [2024-11-29 14:35:29.972382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.974127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.974208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:49.797 [2024-11-29 14:35:29.974255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.721 ms 00:27:49.797 [2024-11-29 14:35:29.974272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.974369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.797 [2024-11-29 14:35:29.974389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:49.797 [2024-11-29 14:35:29.974429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:27:49.797 [2024-11-29 14:35:29.974446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.980434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.797 [2024-11-29 14:35:29.980532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:49.797 [2024-11-29 14:35:29.980573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.797 [2024-11-29 14:35:29.980634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.797 [2024-11-29 14:35:29.980669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:29.980762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:49.798 [2024-11-29 14:35:29.980782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:29.980803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:29.980866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:29.980887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:49.798 [2024-11-29 14:35:29.980903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:29.980917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:29.980942] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:29.980991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:49.798 [2024-11-29 14:35:29.981009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:29.981023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:29.991502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:29.991609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:49.798 [2024-11-29 14:35:29.991647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:29.991670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:49.798 [2024-11-29 14:35:30.000314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.798 [2024-11-29 14:35:30.000398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.798 [2024-11-29 14:35:30.000452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.798 [2024-11-29 14:35:30.000556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:49.798 [2024-11-29 14:35:30.000603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.798 [2024-11-29 14:35:30.000661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 
[2024-11-29 14:35:30.000710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.798 [2024-11-29 14:35:30.000726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.798 [2024-11-29 14:35:30.000733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.798 [2024-11-29 14:35:30.000740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.798 [2024-11-29 14:35:30.000850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8471.936 ms, result 0 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92698 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92698 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92698 ']' 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.059 14:35:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.059 [2024-11-29 14:35:31.851060] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
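The trace above covers the first half of the test: prep_upgrade_on_shutdown is switched on through bdev_ftl_set_property, the cache chunks that still hold data are counted (3 at this point), verbose_mode is re-enabled, and the target is torn down so FTL can persist its metadata before a fresh spdk_tgt is relaunched from the saved tgt.json config. A minimal sketch of that sequence, reusing the rpc.py, jq and spdk_tgt invocations visible in the log; the working directory and the pid variable are assumptions, not taken from the test scripts:

  # enable upgrade preparation on the live FTL bdev (upgrade_shutdown.sh@56 above)
  scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count cache chunks that still hold data; the test expects a non-zero count here
  used=$(scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  # stop the target so FTL runs its shutdown (and the upgrade preparation), then relaunch from the saved config
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"
  build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!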
00:27:50.059 [2024-11-29 14:35:31.851474] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92698 ] 00:27:50.320 [2024-11-29 14:35:32.003262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.321 [2024-11-29 14:35:32.052948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.894 [2024-11-29 14:35:32.383673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.894 [2024-11-29 14:35:32.383759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.894 [2024-11-29 14:35:32.532798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.532857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:50.894 [2024-11-29 14:35:32.532879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:50.894 [2024-11-29 14:35:32.532888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.532957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.532969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:50.894 [2024-11-29 14:35:32.532979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:50.894 [2024-11-29 14:35:32.532987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.533018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:50.894 [2024-11-29 14:35:32.533288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:50.894 [2024-11-29 14:35:32.533305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.533315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:50.894 [2024-11-29 14:35:32.533327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:27:50.894 [2024-11-29 14:35:32.533336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.535024] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:50.894 [2024-11-29 14:35:32.539032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.539091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:50.894 [2024-11-29 14:35:32.539103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.010 ms 00:27:50.894 [2024-11-29 14:35:32.539125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.539203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.539215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:50.894 [2024-11-29 14:35:32.539225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:50.894 [2024-11-29 14:35:32.539233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.547337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 
14:35:32.547382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:50.894 [2024-11-29 14:35:32.547396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.055 ms 00:27:50.894 [2024-11-29 14:35:32.547410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.547466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.547476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:50.894 [2024-11-29 14:35:32.547485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:50.894 [2024-11-29 14:35:32.547517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.547582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.894 [2024-11-29 14:35:32.547593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:50.894 [2024-11-29 14:35:32.547603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:50.894 [2024-11-29 14:35:32.547615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.894 [2024-11-29 14:35:32.547646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:50.895 [2024-11-29 14:35:32.549757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.895 [2024-11-29 14:35:32.549916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:50.895 [2024-11-29 14:35:32.549934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.119 ms 00:27:50.895 [2024-11-29 14:35:32.549943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.895 [2024-11-29 14:35:32.549975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.895 [2024-11-29 14:35:32.549985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:50.895 [2024-11-29 14:35:32.549995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:50.895 [2024-11-29 14:35:32.550009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.895 [2024-11-29 14:35:32.550036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:50.895 [2024-11-29 14:35:32.550059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:50.895 [2024-11-29 14:35:32.550097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:50.895 [2024-11-29 14:35:32.550114] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:50.895 [2024-11-29 14:35:32.550221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:50.895 [2024-11-29 14:35:32.550233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:50.895 [2024-11-29 14:35:32.550250] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:50.895 [2024-11-29 14:35:32.550262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:50.895 [2024-11-29 14:35:32.550290] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:50.895 [2024-11-29 14:35:32.550299] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:50.895 [2024-11-29 14:35:32.550307] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:50.895 [2024-11-29 14:35:32.550316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.895 [2024-11-29 14:35:32.550325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:50.895 [2024-11-29 14:35:32.550334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:27:50.895 [2024-11-29 14:35:32.550347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.895 [2024-11-29 14:35:32.550436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.895 [2024-11-29 14:35:32.550446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:50.895 [2024-11-29 14:35:32.550455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:27:50.895 [2024-11-29 14:35:32.550468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.895 [2024-11-29 14:35:32.550593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:50.895 [2024-11-29 14:35:32.550606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:50.895 [2024-11-29 14:35:32.550618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:50.895 [2024-11-29 14:35:32.550650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:50.895 [2024-11-29 14:35:32.550668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:50.895 [2024-11-29 14:35:32.550676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:50.895 [2024-11-29 14:35:32.550685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:50.895 [2024-11-29 14:35:32.550704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:50.895 [2024-11-29 14:35:32.550713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:50.895 [2024-11-29 14:35:32.550737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:50.895 [2024-11-29 14:35:32.550745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:50.895 [2024-11-29 14:35:32.550769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:50.895 [2024-11-29 14:35:32.550778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550786] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:50.895 [2024-11-29 14:35:32.550795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:50.895 [2024-11-29 14:35:32.550804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:50.895 [2024-11-29 14:35:32.550821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:50.895 [2024-11-29 14:35:32.550830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:50.895 [2024-11-29 14:35:32.550847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:50.895 [2024-11-29 14:35:32.550855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:50.895 [2024-11-29 14:35:32.550872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:50.895 [2024-11-29 14:35:32.550880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:50.895 [2024-11-29 14:35:32.550898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:50.895 [2024-11-29 14:35:32.550906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:50.895 [2024-11-29 14:35:32.550920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:50.895 [2024-11-29 14:35:32.550928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:50.895 [2024-11-29 14:35:32.550943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:50.895 [2024-11-29 14:35:32.550967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:50.895 [2024-11-29 14:35:32.550975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.550984] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:50.895 [2024-11-29 14:35:32.550995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:50.895 [2024-11-29 14:35:32.551004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.895 [2024-11-29 14:35:32.551012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.895 [2024-11-29 14:35:32.551020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:50.895 [2024-11-29 14:35:32.551033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:50.895 [2024-11-29 14:35:32.551041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:50.895 [2024-11-29 14:35:32.551050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:50.895 [2024-11-29 14:35:32.551057] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:50.895 [2024-11-29 14:35:32.551065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:50.895 [2024-11-29 14:35:32.551075] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:50.895 [2024-11-29 14:35:32.551098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:50.895 [2024-11-29 14:35:32.551120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:50.895 [2024-11-29 14:35:32.551144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:50.895 [2024-11-29 14:35:32.551153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:50.895 [2024-11-29 14:35:32.551161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:50.895 [2024-11-29 14:35:32.551169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:50.895 [2024-11-29 14:35:32.551222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:50.895 [2024-11-29 14:35:32.551230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:50.895 [2024-11-29 14:35:32.551240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.896 [2024-11-29 14:35:32.551249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:50.896 [2024-11-29 14:35:32.551258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:50.896 [2024-11-29 14:35:32.551266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:50.896 [2024-11-29 14:35:32.551274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:50.896 [2024-11-29 14:35:32.551283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.896 [2024-11-29 14:35:32.551292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:50.896 [2024-11-29 14:35:32.551301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.761 ms 00:27:50.896 [2024-11-29 14:35:32.551312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.896 [2024-11-29 14:35:32.551358] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:50.896 [2024-11-29 14:35:32.551380] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:55.104 [2024-11-29 14:35:36.735106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.735184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:55.104 [2024-11-29 14:35:36.735203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4183.730 ms 00:27:55.104 [2024-11-29 14:35:36.735213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.748878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.749089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:55.104 [2024-11-29 14:35:36.749112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.536 ms 00:27:55.104 [2024-11-29 14:35:36.749124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.749184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.749194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:55.104 [2024-11-29 14:35:36.749204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:55.104 [2024-11-29 14:35:36.749213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.770181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.770247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:55.104 [2024-11-29 14:35:36.770269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.910 ms 00:27:55.104 [2024-11-29 14:35:36.770280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.770332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.770344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:55.104 [2024-11-29 14:35:36.770355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:55.104 [2024-11-29 14:35:36.770366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.770964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.771004] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:55.104 [2024-11-29 14:35:36.771019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:27:55.104 [2024-11-29 14:35:36.771031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.771124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.771138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:55.104 [2024-11-29 14:35:36.771151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:27:55.104 [2024-11-29 14:35:36.771163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.779560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.779610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:55.104 [2024-11-29 14:35:36.779622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.367 ms 00:27:55.104 [2024-11-29 14:35:36.779633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.783223] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:55.104 [2024-11-29 14:35:36.783273] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:55.104 [2024-11-29 14:35:36.783287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.783296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:55.104 [2024-11-29 14:35:36.783305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.546 ms 00:27:55.104 [2024-11-29 14:35:36.783314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.788077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.788120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:55.104 [2024-11-29 14:35:36.788139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.709 ms 00:27:55.104 [2024-11-29 14:35:36.788148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.790712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.790756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:55.104 [2024-11-29 14:35:36.790766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.509 ms 00:27:55.104 [2024-11-29 14:35:36.790775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.793280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.793324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:55.104 [2024-11-29 14:35:36.793335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.457 ms 00:27:55.104 [2024-11-29 14:35:36.793343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.793731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.793746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:55.104 [2024-11-29 
14:35:36.793757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:27:55.104 [2024-11-29 14:35:36.793766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.814558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.814616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:55.104 [2024-11-29 14:35:36.814635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.772 ms 00:27:55.104 [2024-11-29 14:35:36.814645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.822550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:55.104 [2024-11-29 14:35:36.823453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.823509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:55.104 [2024-11-29 14:35:36.823522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.757 ms 00:27:55.104 [2024-11-29 14:35:36.823536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.823596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.823607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:55.104 [2024-11-29 14:35:36.823632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:55.104 [2024-11-29 14:35:36.823642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.823707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.823724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:55.104 [2024-11-29 14:35:36.823738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:55.104 [2024-11-29 14:35:36.823746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.823776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.823789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:55.104 [2024-11-29 14:35:36.823802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:55.104 [2024-11-29 14:35:36.823814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.823844] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:55.104 [2024-11-29 14:35:36.823856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.823864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:55.104 [2024-11-29 14:35:36.823873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:55.104 [2024-11-29 14:35:36.823881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.828332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.828382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:55.104 [2024-11-29 14:35:36.828393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.426 ms 00:27:55.104 [2024-11-29 14:35:36.828402] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.828487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.104 [2024-11-29 14:35:36.828539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:55.104 [2024-11-29 14:35:36.828553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:55.104 [2024-11-29 14:35:36.828561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.104 [2024-11-29 14:35:36.829684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4296.420 ms, result 0 00:27:55.104 [2024-11-29 14:35:36.844740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.104 [2024-11-29 14:35:36.860743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:55.105 [2024-11-29 14:35:36.868901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:55.367 14:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.367 14:35:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:55.367 14:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:55.367 14:35:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:55.367 14:35:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:55.367 [2024-11-29 14:35:37.108955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.367 [2024-11-29 14:35:37.109011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:55.367 [2024-11-29 14:35:37.109027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:55.367 [2024-11-29 14:35:37.109037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.367 [2024-11-29 14:35:37.109060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.367 [2024-11-29 14:35:37.109070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:55.367 [2024-11-29 14:35:37.109080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:55.367 [2024-11-29 14:35:37.109089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.367 [2024-11-29 14:35:37.109118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.367 [2024-11-29 14:35:37.109127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:55.367 [2024-11-29 14:35:37.109137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:55.367 [2024-11-29 14:35:37.109146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.367 [2024-11-29 14:35:37.109207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:27:55.367 true 00:27:55.367 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:55.629 { 00:27:55.629 "name": "ftl", 00:27:55.629 "properties": [ 00:27:55.629 { 00:27:55.629 "name": "superblock_version", 00:27:55.629 "value": 5, 00:27:55.629 "read-only": true 00:27:55.629 }, 00:27:55.629 { 
00:27:55.629 "name": "base_device", 00:27:55.629 "bands": [ 00:27:55.629 { 00:27:55.629 "id": 0, 00:27:55.629 "state": "CLOSED", 00:27:55.629 "validity": 1.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 1, 00:27:55.629 "state": "CLOSED", 00:27:55.629 "validity": 1.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 2, 00:27:55.629 "state": "CLOSED", 00:27:55.629 "validity": 0.007843137254901933 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 3, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 4, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 5, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 6, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 7, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 8, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 9, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 10, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 11, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 12, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 13, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 14, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 15, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 16, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 17, 00:27:55.629 "state": "FREE", 00:27:55.629 "validity": 0.0 00:27:55.629 } 00:27:55.629 ], 00:27:55.629 "read-only": true 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "name": "cache_device", 00:27:55.629 "type": "bdev", 00:27:55.629 "chunks": [ 00:27:55.629 { 00:27:55.629 "id": 0, 00:27:55.629 "state": "INACTIVE", 00:27:55.629 "utilization": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 1, 00:27:55.629 "state": "OPEN", 00:27:55.629 "utilization": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 2, 00:27:55.629 "state": "OPEN", 00:27:55.629 "utilization": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 3, 00:27:55.629 "state": "FREE", 00:27:55.629 "utilization": 0.0 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "id": 4, 00:27:55.629 "state": "FREE", 00:27:55.629 "utilization": 0.0 00:27:55.629 } 00:27:55.629 ], 00:27:55.629 "read-only": true 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "name": "verbose_mode", 00:27:55.629 "value": true, 00:27:55.629 "unit": "", 00:27:55.629 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:55.629 }, 00:27:55.629 { 00:27:55.629 "name": "prep_upgrade_on_shutdown", 00:27:55.629 "value": false, 00:27:55.629 "unit": "", 00:27:55.629 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:55.629 } 00:27:55.629 ] 00:27:55.629 } 00:27:55.629 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:55.629 14:35:37 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:55.629 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:55.891 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:55.891 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:55.891 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:55.891 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:55.891 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:56.152 Validate MD5 checksum, iteration 1 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:56.152 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:56.153 14:35:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:56.153 [2024-11-29 14:35:37.882767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
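The two jq filters traced just above are how the test decides the FTL device is quiescent before it starts the shutdown scenario: it takes the bdev_ftl_get_properties dump shown earlier and counts cache chunks with non-zero utilization and bands still reported as OPENED (both counts come back 0 in this run). A minimal sketch of that check, assuming the repo root is in $SPDK_DIR and that rpc.py and jq are on PATH; the variable names are illustrative, the filters themselves are copied from the trace:

  props=$("$SPDK_DIR/scripts/rpc.py" bdev_ftl_get_properties -b ftl)
  # cache chunks that still hold data (utilization != 0.0)
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # bands reported in the OPENED state
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  echo "in-use cache chunks: $used, opened bands: $opened"   # both 0 in this run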
00:27:56.153 [2024-11-29 14:35:37.883648] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92779 ] 00:27:56.414 [2024-11-29 14:35:38.035597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.414 [2024-11-29 14:35:38.084208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.801  [2024-11-29T14:35:40.542Z] Copying: 539/1024 [MB] (539 MBps) [2024-11-29T14:35:41.115Z] Copying: 1024/1024 [MB] (average 544 MBps) 00:27:59.321 00:27:59.321 14:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:59.321 14:35:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:01.865 Validate MD5 checksum, iteration 2 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=559d79a732bbd5f06301cc6b3aa7aa9c 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 559d79a732bbd5f06301cc6b3aa7aa9c != \5\5\9\d\7\9\a\7\3\2\b\b\d\5\f\0\6\3\0\1\c\c\6\b\3\a\a\7\a\a\9\c ]] 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:01.865 14:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:01.865 [2024-11-29 14:35:43.328900] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:01.865 [2024-11-29 14:35:43.329143] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92835 ] 00:28:01.865 [2024-11-29 14:35:43.476188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.865 [2024-11-29 14:35:43.509174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.242  [2024-11-29T14:35:45.975Z] Copying: 622/1024 [MB] (622 MBps) [2024-11-29T14:35:50.166Z] Copying: 1024/1024 [MB] (average 576 MBps) 00:28:08.372 00:28:08.372 14:35:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:08.372 14:35:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3db89e31c5dffef94516bb3363376b8e 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3db89e31c5dffef94516bb3363376b8e != \3\d\b\8\9\e\3\1\c\5\d\f\f\e\f\9\4\5\1\6\b\b\3\3\6\3\3\7\6\b\8\e ]] 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92698 ]] 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92698 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:10.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92924 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92924 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92924 ']' 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
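What the two "Validate MD5 checksum" iterations above and the kill -9 that follows amount to: spdk_dd, acting as an NVMe/TCP initiator (ini.json), reads 1024 x 1 MiB blocks of the ftln1 bdev into a scratch file, the file is hashed, and the hash is compared against the value recorded for that slice; the target is then killed with SIGKILL so FTL never gets a clean shutdown, restarted from tgt.json, and the same slices are re-validated after recovery. A rough sketch under those assumptions; $SPDK_DIR and the expected_sum array are illustrative, while the spdk_dd flags, the md5 values and the pid are taken from the trace:

  skip=0
  for i in 0 1; do
      "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json="$SPDK_DIR/test/ftl/config/ini.json" \
          --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
      [[ $sum == "${expected_sum[$i]}" ]] || exit 1      # 559d79a7... and 3db89e31... in this run
      skip=$((skip + 1024))
  done
  kill -9 "$spdk_tgt_pid"                                # dirty shutdown, pid 92698 here
  # the target is then restarted with --config=test/ftl/config/tgt.json and the loop re-run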
00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.275 14:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.276 14:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:10.276 [2024-11-29 14:35:51.715474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:10.276 [2024-11-29 14:35:51.715600] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92924 ] 00:28:10.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92698 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:10.276 [2024-11-29 14:35:51.862281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.276 [2024-11-29 14:35:51.904875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.534 [2024-11-29 14:35:52.200572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:10.534 [2024-11-29 14:35:52.200629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:10.809 [2024-11-29 14:35:52.338855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.809 [2024-11-29 14:35:52.338892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:10.809 [2024-11-29 14:35:52.338904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:10.809 [2024-11-29 14:35:52.338915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.809 [2024-11-29 14:35:52.338965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.809 [2024-11-29 14:35:52.338974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:10.809 [2024-11-29 14:35:52.338981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:10.809 [2024-11-29 14:35:52.338986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.339006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:10.810 [2024-11-29 14:35:52.339201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:10.810 [2024-11-29 14:35:52.339213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.339219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:10.810 [2024-11-29 14:35:52.339228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:28:10.810 [2024-11-29 14:35:52.339234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.339434] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:10.810 [2024-11-29 14:35:52.344269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.344306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:10.810 [2024-11-29 14:35:52.344316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.836 ms 
00:28:10.810 [2024-11-29 14:35:52.344328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.345315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.345342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:10.810 [2024-11-29 14:35:52.345351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:10.810 [2024-11-29 14:35:52.345357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.345619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.345631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:10.810 [2024-11-29 14:35:52.345638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:28:10.810 [2024-11-29 14:35:52.345644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.345676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.345683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:10.810 [2024-11-29 14:35:52.345695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:10.810 [2024-11-29 14:35:52.345703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.345731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.345739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:10.810 [2024-11-29 14:35:52.345745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:10.810 [2024-11-29 14:35:52.345753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.345772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:10.810 [2024-11-29 14:35:52.346698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.346732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:10.810 [2024-11-29 14:35:52.346750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:28:10.810 [2024-11-29 14:35:52.346764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.346800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.346817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:10.810 [2024-11-29 14:35:52.346839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:10.810 [2024-11-29 14:35:52.346853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.346888] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:10.810 [2024-11-29 14:35:52.346918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:10.810 [2024-11-29 14:35:52.346964] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:10.810 [2024-11-29 14:35:52.347054] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:10.810 [2024-11-29 
14:35:52.347171] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:10.810 [2024-11-29 14:35:52.347203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:10.810 [2024-11-29 14:35:52.347229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:10.810 [2024-11-29 14:35:52.347254] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:10.810 [2024-11-29 14:35:52.347278] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:10.810 [2024-11-29 14:35:52.347301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:10.810 [2024-11-29 14:35:52.347363] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:10.810 [2024-11-29 14:35:52.347386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:10.810 [2024-11-29 14:35:52.347400] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:10.810 [2024-11-29 14:35:52.347415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.347430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:10.810 [2024-11-29 14:35:52.347445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:28:10.810 [2024-11-29 14:35:52.347464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.347556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.810 [2024-11-29 14:35:52.347578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:10.810 [2024-11-29 14:35:52.347593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:10.810 [2024-11-29 14:35:52.347608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.810 [2024-11-29 14:35:52.347740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:10.810 [2024-11-29 14:35:52.347766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:10.810 [2024-11-29 14:35:52.347783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:10.810 [2024-11-29 14:35:52.347798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.347815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:10.810 [2024-11-29 14:35:52.347829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.347843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:10.810 [2024-11-29 14:35:52.347898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:10.810 [2024-11-29 14:35:52.347915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:10.810 [2024-11-29 14:35:52.347930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.347944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:10.810 [2024-11-29 14:35:52.347958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:10.810 [2024-11-29 14:35:52.347980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 
14:35:52.347995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:10.810 [2024-11-29 14:35:52.348035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:10.810 [2024-11-29 14:35:52.348053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:10.810 [2024-11-29 14:35:52.348230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:10.810 [2024-11-29 14:35:52.348246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:10.810 [2024-11-29 14:35:52.348275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:10.810 [2024-11-29 14:35:52.348290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:10.810 [2024-11-29 14:35:52.348318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:10.810 [2024-11-29 14:35:52.348332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:10.810 [2024-11-29 14:35:52.348381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:10.810 [2024-11-29 14:35:52.348448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:10.810 [2024-11-29 14:35:52.348530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:10.810 [2024-11-29 14:35:52.348547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:10.810 [2024-11-29 14:35:52.348580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:10.810 [2024-11-29 14:35:52.348595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:10.810 [2024-11-29 14:35:52.348649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:10.810 [2024-11-29 14:35:52.348694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:10.810 [2024-11-29 14:35:52.348736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:10.810 [2024-11-29 14:35:52.348742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348747] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:10.810 [2024-11-29 14:35:52.348754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:10.810 
[2024-11-29 14:35:52.348759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:10.810 [2024-11-29 14:35:52.348765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:10.810 [2024-11-29 14:35:52.348772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:10.811 [2024-11-29 14:35:52.348780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:10.811 [2024-11-29 14:35:52.348785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:10.811 [2024-11-29 14:35:52.348791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:10.811 [2024-11-29 14:35:52.348796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:10.811 [2024-11-29 14:35:52.348801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:10.811 [2024-11-29 14:35:52.348808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:10.811 [2024-11-29 14:35:52.348816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:10.811 [2024-11-29 14:35:52.348830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:10.811 [2024-11-29 14:35:52.348848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:10.811 [2024-11-29 14:35:52.348853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:10.811 [2024-11-29 14:35:52.348859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:10.811 [2024-11-29 14:35:52.348864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:10.811 [2024-11-29 14:35:52.348906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:10.811 [2024-11-29 14:35:52.348913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:10.811 [2024-11-29 14:35:52.348928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:10.811 [2024-11-29 14:35:52.348933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:10.811 [2024-11-29 14:35:52.348939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:10.811 [2024-11-29 14:35:52.348945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.348951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:10.811 [2024-11-29 14:35:52.348957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:28:10.811 [2024-11-29 14:35:52.348964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.357319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.357346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:10.811 [2024-11-29 14:35:52.357354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.312 ms 00:28:10.811 [2024-11-29 14:35:52.357360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.357389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.357396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:10.811 [2024-11-29 14:35:52.357410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:10.811 [2024-11-29 14:35:52.357417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.375996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.376037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:10.811 [2024-11-29 14:35:52.376050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.533 ms 00:28:10.811 [2024-11-29 14:35:52.376058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.376093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.376102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:10.811 [2024-11-29 14:35:52.376110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:10.811 [2024-11-29 14:35:52.376120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.376231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.376242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:10.811 [2024-11-29 14:35:52.376251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:10.811 [2024-11-29 14:35:52.376261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.376306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.376315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:10.811 [2024-11-29 14:35:52.376323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:10.811 [2024-11-29 14:35:52.376330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.383344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.383514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:10.811 [2024-11-29 14:35:52.383530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.990 ms 00:28:10.811 [2024-11-29 14:35:52.383538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.383634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.383650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:10.811 [2024-11-29 14:35:52.383660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:10.811 [2024-11-29 14:35:52.383668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.388948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.388984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:10.811 [2024-11-29 14:35:52.389001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.257 ms 00:28:10.811 [2024-11-29 14:35:52.389010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.390407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.390439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:10.811 [2024-11-29 14:35:52.390449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:28:10.811 [2024-11-29 14:35:52.390458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.408073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.408109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:10.811 [2024-11-29 14:35:52.408124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.585 ms 00:28:10.811 [2024-11-29 14:35:52.408132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.408248] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:10.811 [2024-11-29 14:35:52.408339] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:10.811 [2024-11-29 14:35:52.408424] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:10.811 [2024-11-29 14:35:52.408525] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:10.811 [2024-11-29 14:35:52.408533] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.408540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:10.811 [2024-11-29 14:35:52.408547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:28:10.811 [2024-11-29 14:35:52.408553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.408588] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:10.811 [2024-11-29 14:35:52.408596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.408603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:10.811 [2024-11-29 14:35:52.408610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:10.811 [2024-11-29 14:35:52.408620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.411813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.411846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:10.811 [2024-11-29 14:35:52.411854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.176 ms 00:28:10.811 [2024-11-29 14:35:52.411861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.412418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.811 [2024-11-29 14:35:52.412440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:10.811 [2024-11-29 14:35:52.412448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:10.811 [2024-11-29 14:35:52.412455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.811 [2024-11-29 14:35:52.412536] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:10.811 [2024-11-29 14:35:52.412701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.812 [2024-11-29 14:35:52.412711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:10.812 [2024-11-29 14:35:52.412719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:28:10.812 [2024-11-29 14:35:52.412724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.385 [2024-11-29 14:35:53.092798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.385 [2024-11-29 14:35:53.093167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:11.385 [2024-11-29 14:35:53.093199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 679.820 ms 00:28:11.385 [2024-11-29 14:35:53.093212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.385 [2024-11-29 14:35:53.095826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.385 [2024-11-29 14:35:53.095878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:11.385 [2024-11-29 14:35:53.095892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.853 ms 00:28:11.385 [2024-11-29 14:35:53.095911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.385 [2024-11-29 14:35:53.097044] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:11.385 [2024-11-29 14:35:53.097109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.385 [2024-11-29 14:35:53.097120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:11.385 [2024-11-29 14:35:53.097137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:28:11.385 [2024-11-29 14:35:53.097146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.385 [2024-11-29 14:35:53.097191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.385 [2024-11-29 14:35:53.097202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:11.385 [2024-11-29 14:35:53.097213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:11.385 [2024-11-29 14:35:53.097230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.385 [2024-11-29 14:35:53.097278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 684.728 ms, result 0 00:28:11.385 [2024-11-29 14:35:53.097331] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:11.385 [2024-11-29 14:35:53.097678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.385 [2024-11-29 14:35:53.097694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:11.385 [2024-11-29 14:35:53.097704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:28:11.385 [2024-11-29 14:35:53.097712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.714345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.714411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:11.953 [2024-11-29 14:35:53.714424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.943 ms 00:28:11.953 [2024-11-29 14:35:53.714430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.716029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.716058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:11.953 [2024-11-29 14:35:53.716067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:28:11.953 [2024-11-29 14:35:53.716073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.716452] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:11.953 [2024-11-29 14:35:53.716471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.716478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:11.953 [2024-11-29 14:35:53.716486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:28:11.953 [2024-11-29 14:35:53.716505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.716530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.716538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:11.953 [2024-11-29 14:35:53.716545] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.953 [2024-11-29 14:35:53.716550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.716585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 619.257 ms, result 0 00:28:11.953 [2024-11-29 14:35:53.716624] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:11.953 [2024-11-29 14:35:53.716632] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:11.953 [2024-11-29 14:35:53.716639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.716653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:11.953 [2024-11-29 14:35:53.716661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1304.118 ms 00:28:11.953 [2024-11-29 14:35:53.716667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.716744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.716752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:11.953 [2024-11-29 14:35:53.716759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:11.953 [2024-11-29 14:35:53.716765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.723684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:11.953 [2024-11-29 14:35:53.723777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.723786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:11.953 [2024-11-29 14:35:53.723793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.000 ms 00:28:11.953 [2024-11-29 14:35:53.723807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.724334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.724351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:11.953 [2024-11-29 14:35:53.724359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:28:11.953 [2024-11-29 14:35:53.724365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.726085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.953 [2024-11-29 14:35:53.726240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:11.953 [2024-11-29 14:35:53.726259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.705 ms 00:28:11.953 [2024-11-29 14:35:53.726265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.953 [2024-11-29 14:35:53.726302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.954 [2024-11-29 14:35:53.726309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:11.954 [2024-11-29 14:35:53.726316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:11.954 [2024-11-29 14:35:53.726322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.954 [2024-11-29 14:35:53.726411] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.954 [2024-11-29 14:35:53.726420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:11.954 [2024-11-29 14:35:53.726427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:11.954 [2024-11-29 14:35:53.726436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.954 [2024-11-29 14:35:53.726454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.954 [2024-11-29 14:35:53.726460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:11.954 [2024-11-29 14:35:53.726466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.954 [2024-11-29 14:35:53.726472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.954 [2024-11-29 14:35:53.726512] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:11.954 [2024-11-29 14:35:53.726520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.954 [2024-11-29 14:35:53.726528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:11.954 [2024-11-29 14:35:53.726538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:11.954 [2024-11-29 14:35:53.726547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.954 [2024-11-29 14:35:53.726596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.954 [2024-11-29 14:35:53.726604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:11.954 [2024-11-29 14:35:53.726610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:11.954 [2024-11-29 14:35:53.726616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.954 [2024-11-29 14:35:53.727555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1388.296 ms, result 0 00:28:11.954 [2024-11-29 14:35:53.740044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.213 [2024-11-29 14:35:53.756024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:12.213 [2024-11-29 14:35:53.764148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:12.471 Validate MD5 checksum, iteration 1 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:12.471 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:12.472 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.472 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.472 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.472 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.472 14:35:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:12.730 [2024-11-29 14:35:54.303511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:12.730 [2024-11-29 14:35:54.303628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92963 ] 00:28:12.730 [2024-11-29 14:35:54.451134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.730 [2024-11-29 14:35:54.484091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.107  [2024-11-29T14:35:56.837Z] Copying: 573/1024 [MB] (573 MBps) [2024-11-29T14:35:58.741Z] Copying: 1024/1024 [MB] (average 553 MBps) 00:28:16.947 00:28:16.947 14:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:16.947 14:35:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:18.848 Validate MD5 checksum, iteration 2 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=559d79a732bbd5f06301cc6b3aa7aa9c 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 559d79a732bbd5f06301cc6b3aa7aa9c != \5\5\9\d\7\9\a\7\3\2\b\b\d\5\f\0\6\3\0\1\c\c\6\b\3\a\a\7\a\a\9\c ]] 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:18.848 14:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.106 [2024-11-29 14:36:00.644051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:19.106 [2024-11-29 14:36:00.644172] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93031 ] 00:28:19.106 [2024-11-29 14:36:00.789933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.106 [2024-11-29 14:36:00.826191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.541  [2024-11-29T14:36:02.907Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-29T14:36:03.167Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:28:21.373 00:28:21.373 14:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:21.373 14:36:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3db89e31c5dffef94516bb3363376b8e 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3db89e31c5dffef94516bb3363376b8e != \3\d\b\8\9\e\3\1\c\5\d\f\f\e\f\9\4\5\1\6\b\b\3\3\6\3\3\7\6\b\8\e ]] 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92924 ]] 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92924 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92924 ']' 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92924 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92924 00:28:23.906 killing process with pid 92924 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92924' 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92924 00:28:23.906 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92924 00:28:23.906 [2024-11-29 14:36:05.415872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:23.906 [2024-11-29 14:36:05.421840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.906 [2024-11-29 14:36:05.421875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:23.906 [2024-11-29 14:36:05.421887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:23.906 [2024-11-29 14:36:05.421894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.906 [2024-11-29 14:36:05.421912] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:23.906 [2024-11-29 14:36:05.422431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.422452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:23.907 [2024-11-29 14:36:05.422461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:28:23.907 [2024-11-29 14:36:05.422470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.422666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.422676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:23.907 [2024-11-29 14:36:05.422684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:28:23.907 [2024-11-29 14:36:05.422691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.424187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.424212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:23.907 [2024-11-29 14:36:05.424220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.483 ms 00:28:23.907 [2024-11-29 14:36:05.424226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.425104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.425126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:23.907 [2024-11-29 14:36:05.425135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 00:28:23.907 [2024-11-29 14:36:05.425142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.427632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.427658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:23.907 [2024-11-29 14:36:05.427666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.462 ms 00:28:23.907 [2024-11-29 14:36:05.427672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.429320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.429347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:23.907 [2024-11-29 14:36:05.429361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.615 ms 00:28:23.907 [2024-11-29 14:36:05.429367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.429430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.429437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:23.907 [2024-11-29 14:36:05.429445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:23.907 [2024-11-29 14:36:05.429451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.431581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.431606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:23.907 [2024-11-29 14:36:05.431613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.117 ms 00:28:23.907 [2024-11-29 14:36:05.431618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.433714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.433738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:23.907 [2024-11-29 14:36:05.433745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.070 ms 00:28:23.907 [2024-11-29 14:36:05.433751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.435610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.435635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:23.907 [2024-11-29 14:36:05.435643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.834 ms 00:28:23.907 [2024-11-29 14:36:05.435648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.437450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.437476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:23.907 [2024-11-29 14:36:05.437483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.741 ms 00:28:23.907 [2024-11-29 14:36:05.437499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.437523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:23.907 [2024-11-29 14:36:05.437540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:23.907 [2024-11-29 14:36:05.437549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:23.907 [2024-11-29 14:36:05.437555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:23.907 [2024-11-29 14:36:05.437562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437581] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.907 [2024-11-29 14:36:05.437654] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:23.907 [2024-11-29 14:36:05.437661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8caeb5f4-05e9-41b1-91be-745cb7348607 00:28:23.907 [2024-11-29 14:36:05.437668] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:23.907 [2024-11-29 14:36:05.437674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:23.907 [2024-11-29 14:36:05.437679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:23.907 [2024-11-29 14:36:05.437686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:23.907 [2024-11-29 14:36:05.437691] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:23.907 [2024-11-29 14:36:05.437698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:23.907 [2024-11-29 14:36:05.437705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:23.907 [2024-11-29 14:36:05.437711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:23.907 [2024-11-29 14:36:05.437716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:23.907 [2024-11-29 14:36:05.437721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.437727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:23.907 [2024-11-29 14:36:05.437736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:28:23.907 [2024-11-29 14:36:05.437742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.439408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.439432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:23.907 [2024-11-29 14:36:05.439439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.654 ms 00:28:23.907 [2024-11-29 14:36:05.439446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.439547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.907 [2024-11-29 14:36:05.439557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:23.907 [2024-11-29 14:36:05.439564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:28:23.907 [2024-11-29 14:36:05.439573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.445554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.445581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:23.907 [2024-11-29 14:36:05.445590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.445596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.445621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.445631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:23.907 [2024-11-29 14:36:05.445637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.445643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.445686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.445693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:23.907 [2024-11-29 14:36:05.445704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.445710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.445725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.445735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:23.907 [2024-11-29 14:36:05.445744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.445750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.456662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.456699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:23.907 [2024-11-29 14:36:05.456709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.456715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.907 [2024-11-29 14:36:05.465203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.907 [2024-11-29 14:36:05.465241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:23.907 [2024-11-29 14:36:05.465250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.907 [2024-11-29 14:36:05.465256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465322] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:23.908 [2024-11-29 14:36:05.465328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:23.908 [2024-11-29 14:36:05.465379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:23.908 [2024-11-29 14:36:05.465463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:23.908 [2024-11-29 14:36:05.465524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:23.908 [2024-11-29 14:36:05.465581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.908 [2024-11-29 14:36:05.465635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:23.908 [2024-11-29 14:36:05.465642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.908 [2024-11-29 14:36:05.465650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.908 [2024-11-29 14:36:05.465758] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.890 ms, result 0 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:23.908 Remove shared memory files 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92698 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:23.908 ************************************ 00:28:23.908 END TEST ftl_upgrade_shutdown 00:28:23.908 ************************************ 00:28:23.908 00:28:23.908 real 1m15.634s 00:28:23.908 user 1m39.884s 00:28:23.908 sys 0m20.449s 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.908 14:36:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.167 14:36:05 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:24.167 14:36:05 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:24.167 14:36:05 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:24.167 14:36:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.167 14:36:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:24.167 ************************************ 00:28:24.167 START TEST ftl_restore_fast 00:28:24.167 ************************************ 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:24.167 * Looking for test storage... 00:28:24.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:24.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.167 --rc genhtml_branch_coverage=1 00:28:24.167 --rc genhtml_function_coverage=1 00:28:24.167 --rc genhtml_legend=1 00:28:24.167 --rc geninfo_all_blocks=1 00:28:24.167 --rc geninfo_unexecuted_blocks=1 00:28:24.167 00:28:24.167 ' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:24.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.167 --rc genhtml_branch_coverage=1 00:28:24.167 --rc genhtml_function_coverage=1 00:28:24.167 --rc genhtml_legend=1 00:28:24.167 --rc geninfo_all_blocks=1 00:28:24.167 --rc geninfo_unexecuted_blocks=1 00:28:24.167 00:28:24.167 ' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:24.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.167 --rc genhtml_branch_coverage=1 00:28:24.167 --rc genhtml_function_coverage=1 00:28:24.167 --rc genhtml_legend=1 00:28:24.167 --rc geninfo_all_blocks=1 00:28:24.167 --rc geninfo_unexecuted_blocks=1 00:28:24.167 00:28:24.167 ' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:24.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.167 --rc genhtml_branch_coverage=1 00:28:24.167 --rc genhtml_function_coverage=1 00:28:24.167 --rc genhtml_legend=1 00:28:24.167 --rc geninfo_all_blocks=1 00:28:24.167 --rc geninfo_unexecuted_blocks=1 00:28:24.167 00:28:24.167 ' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
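Note: the trace above this point closes out ftl_upgrade_shutdown (md5 verification of the copied-out data, shutdown of target pid 92924 with its "Management process finished" dump, and shared-memory cleanup), while the lines that follow are ftl_restore_fast sourcing ftl/common.sh and building its bdev stack before creating the FTL bdev with --fast-shutdown. The rpc.py sequence the script drives can be reproduced by hand roughly as below; this is a minimal sketch assembled from the calls visible later in this log, and it assumes a running spdk_tgt plus the same PCI addresses as this run (0000:00:11.0 for the base device, 0000:00:10.0 for the NV cache).

#!/usr/bin/env bash
# Minimal sketch of the bdev setup that restore.sh -f performs through rpc.py in this run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe, exposed as nvme0n1
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe, exposed as nvc0n1
LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the new lvstore UUID
LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")         # thin-provisioned 103424 MiB data volume, prints its UUID
$RPC bdev_split_create nvc0n1 -s 5171 1                             # nvc0n1p0 becomes the write-buffer cache
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown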
00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.1MOmOG5F0Y 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:24.167 14:36:05 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93163 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93163 00:28:24.167 14:36:05 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93163 ']' 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.168 14:36:05 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:24.426 [2024-11-29 14:36:05.967045] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:24.426 [2024-11-29 14:36:05.967173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93163 ] 00:28:24.426 [2024-11-29 14:36:06.110708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.426 [2024-11-29 14:36:06.151567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:24.992 14:36:06 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:25.251 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:25.509 { 00:28:25.509 "name": "nvme0n1", 00:28:25.509 "aliases": [ 00:28:25.509 "ad359684-0077-4fc7-ad2f-11882e2eeaf4" 00:28:25.509 ], 00:28:25.509 "product_name": "NVMe disk", 00:28:25.509 "block_size": 4096, 00:28:25.509 "num_blocks": 1310720, 00:28:25.509 "uuid": "ad359684-0077-4fc7-ad2f-11882e2eeaf4", 00:28:25.509 "numa_id": -1, 00:28:25.509 "assigned_rate_limits": { 00:28:25.509 "rw_ios_per_sec": 0, 00:28:25.509 "rw_mbytes_per_sec": 0, 00:28:25.509 "r_mbytes_per_sec": 0, 00:28:25.509 "w_mbytes_per_sec": 0 00:28:25.509 }, 00:28:25.509 "claimed": true, 00:28:25.509 "claim_type": "read_many_write_one", 00:28:25.509 "zoned": false, 00:28:25.509 "supported_io_types": { 00:28:25.509 "read": true, 00:28:25.509 "write": true, 00:28:25.509 "unmap": true, 00:28:25.509 "flush": true, 00:28:25.509 "reset": true, 00:28:25.509 "nvme_admin": true, 00:28:25.509 "nvme_io": true, 00:28:25.509 "nvme_io_md": false, 00:28:25.509 "write_zeroes": true, 00:28:25.509 "zcopy": false, 00:28:25.509 "get_zone_info": false, 00:28:25.509 "zone_management": false, 00:28:25.509 "zone_append": false, 00:28:25.509 "compare": true, 00:28:25.509 "compare_and_write": false, 00:28:25.509 "abort": true, 00:28:25.509 "seek_hole": false, 00:28:25.509 "seek_data": false, 00:28:25.509 "copy": true, 00:28:25.509 "nvme_iov_md": false 00:28:25.509 }, 00:28:25.509 "driver_specific": { 00:28:25.509 "nvme": [ 00:28:25.509 { 00:28:25.509 "pci_address": "0000:00:11.0", 00:28:25.509 "trid": { 00:28:25.509 "trtype": "PCIe", 00:28:25.509 "traddr": "0000:00:11.0" 00:28:25.509 }, 00:28:25.509 "ctrlr_data": { 00:28:25.509 "cntlid": 0, 00:28:25.509 "vendor_id": "0x1b36", 00:28:25.509 "model_number": "QEMU NVMe Ctrl", 00:28:25.509 "serial_number": "12341", 00:28:25.509 "firmware_revision": "8.0.0", 00:28:25.509 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:25.509 "oacs": { 00:28:25.509 "security": 0, 00:28:25.509 "format": 1, 00:28:25.509 "firmware": 0, 00:28:25.509 "ns_manage": 1 00:28:25.509 }, 00:28:25.509 "multi_ctrlr": false, 00:28:25.509 "ana_reporting": false 00:28:25.509 }, 00:28:25.509 "vs": { 00:28:25.509 "nvme_version": "1.4" 00:28:25.509 }, 00:28:25.509 "ns_data": { 00:28:25.509 "id": 1, 00:28:25.509 "can_share": false 00:28:25.509 } 00:28:25.509 } 00:28:25.509 ], 00:28:25.509 "mp_policy": "active_passive" 00:28:25.509 } 00:28:25.509 } 00:28:25.509 ]' 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:25.509 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:25.768 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=a186f17b-624a-4efb-9e8c-6d6c984535fc 00:28:25.768 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:25.768 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a186f17b-624a-4efb-9e8c-6d6c984535fc 00:28:26.027 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:26.285 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=64bea538-3f5e-4d6f-8b62-a20df8f3af72 00:28:26.285 14:36:07 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 64bea538-3f5e-4d6f-8b62-a20df8f3af72 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:26.285 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:26.544 { 00:28:26.544 "name": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:26.544 "aliases": [ 00:28:26.544 "lvs/nvme0n1p0" 00:28:26.544 ], 00:28:26.544 "product_name": "Logical Volume", 00:28:26.544 "block_size": 4096, 00:28:26.544 "num_blocks": 26476544, 00:28:26.544 "uuid": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:26.544 "assigned_rate_limits": { 00:28:26.544 "rw_ios_per_sec": 0, 00:28:26.544 "rw_mbytes_per_sec": 0, 00:28:26.544 "r_mbytes_per_sec": 0, 00:28:26.544 "w_mbytes_per_sec": 0 00:28:26.544 }, 00:28:26.544 "claimed": false, 00:28:26.544 "zoned": false, 00:28:26.544 "supported_io_types": { 00:28:26.544 "read": true, 00:28:26.544 "write": true, 00:28:26.544 "unmap": true, 00:28:26.544 "flush": false, 00:28:26.544 "reset": true, 00:28:26.544 "nvme_admin": false, 00:28:26.544 "nvme_io": false, 00:28:26.544 "nvme_io_md": false, 00:28:26.544 "write_zeroes": true, 00:28:26.544 "zcopy": false, 00:28:26.544 "get_zone_info": false, 00:28:26.544 "zone_management": false, 00:28:26.544 
"zone_append": false, 00:28:26.544 "compare": false, 00:28:26.544 "compare_and_write": false, 00:28:26.544 "abort": false, 00:28:26.544 "seek_hole": true, 00:28:26.544 "seek_data": true, 00:28:26.544 "copy": false, 00:28:26.544 "nvme_iov_md": false 00:28:26.544 }, 00:28:26.544 "driver_specific": { 00:28:26.544 "lvol": { 00:28:26.544 "lvol_store_uuid": "64bea538-3f5e-4d6f-8b62-a20df8f3af72", 00:28:26.544 "base_bdev": "nvme0n1", 00:28:26.544 "thin_provision": true, 00:28:26.544 "num_allocated_clusters": 0, 00:28:26.544 "snapshot": false, 00:28:26.544 "clone": false, 00:28:26.544 "esnap_clone": false 00:28:26.544 } 00:28:26.544 } 00:28:26.544 } 00:28:26.544 ]' 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:26.544 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:26.802 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:27.061 { 00:28:27.061 "name": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:27.061 "aliases": [ 00:28:27.061 "lvs/nvme0n1p0" 00:28:27.061 ], 00:28:27.061 "product_name": "Logical Volume", 00:28:27.061 "block_size": 4096, 00:28:27.061 "num_blocks": 26476544, 00:28:27.061 "uuid": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:27.061 "assigned_rate_limits": { 00:28:27.061 "rw_ios_per_sec": 0, 00:28:27.061 "rw_mbytes_per_sec": 0, 00:28:27.061 "r_mbytes_per_sec": 0, 00:28:27.061 "w_mbytes_per_sec": 0 00:28:27.061 }, 00:28:27.061 "claimed": false, 00:28:27.061 "zoned": false, 00:28:27.061 "supported_io_types": { 00:28:27.061 "read": true, 00:28:27.061 "write": true, 00:28:27.061 "unmap": true, 00:28:27.061 "flush": false, 00:28:27.061 "reset": true, 00:28:27.061 "nvme_admin": false, 00:28:27.061 "nvme_io": false, 00:28:27.061 "nvme_io_md": false, 00:28:27.061 "write_zeroes": true, 00:28:27.061 "zcopy": false, 00:28:27.061 "get_zone_info": false, 00:28:27.061 
"zone_management": false, 00:28:27.061 "zone_append": false, 00:28:27.061 "compare": false, 00:28:27.061 "compare_and_write": false, 00:28:27.061 "abort": false, 00:28:27.061 "seek_hole": true, 00:28:27.061 "seek_data": true, 00:28:27.061 "copy": false, 00:28:27.061 "nvme_iov_md": false 00:28:27.061 }, 00:28:27.061 "driver_specific": { 00:28:27.061 "lvol": { 00:28:27.061 "lvol_store_uuid": "64bea538-3f5e-4d6f-8b62-a20df8f3af72", 00:28:27.061 "base_bdev": "nvme0n1", 00:28:27.061 "thin_provision": true, 00:28:27.061 "num_allocated_clusters": 0, 00:28:27.061 "snapshot": false, 00:28:27.061 "clone": false, 00:28:27.061 "esnap_clone": false 00:28:27.061 } 00:28:27.061 } 00:28:27.061 } 00:28:27.061 ]' 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:27.061 14:36:08 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:27.319 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:27.577 { 00:28:27.577 "name": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:27.577 "aliases": [ 00:28:27.577 "lvs/nvme0n1p0" 00:28:27.577 ], 00:28:27.577 "product_name": "Logical Volume", 00:28:27.577 "block_size": 4096, 00:28:27.577 "num_blocks": 26476544, 00:28:27.577 "uuid": "0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3", 00:28:27.577 "assigned_rate_limits": { 00:28:27.577 "rw_ios_per_sec": 0, 00:28:27.577 "rw_mbytes_per_sec": 0, 00:28:27.577 "r_mbytes_per_sec": 0, 00:28:27.577 "w_mbytes_per_sec": 0 00:28:27.577 }, 00:28:27.577 "claimed": false, 00:28:27.577 "zoned": false, 00:28:27.577 "supported_io_types": { 00:28:27.577 "read": true, 00:28:27.577 "write": true, 00:28:27.577 "unmap": true, 00:28:27.577 "flush": false, 00:28:27.577 "reset": true, 00:28:27.577 "nvme_admin": false, 00:28:27.577 "nvme_io": false, 00:28:27.577 "nvme_io_md": false, 00:28:27.577 "write_zeroes": true, 00:28:27.577 "zcopy": false, 00:28:27.577 "get_zone_info": false, 00:28:27.577 "zone_management": false, 00:28:27.577 "zone_append": false, 00:28:27.577 "compare": false, 00:28:27.577 "compare_and_write": false, 00:28:27.577 "abort": false, 
00:28:27.577 "seek_hole": true, 00:28:27.577 "seek_data": true, 00:28:27.577 "copy": false, 00:28:27.577 "nvme_iov_md": false 00:28:27.577 }, 00:28:27.577 "driver_specific": { 00:28:27.577 "lvol": { 00:28:27.577 "lvol_store_uuid": "64bea538-3f5e-4d6f-8b62-a20df8f3af72", 00:28:27.577 "base_bdev": "nvme0n1", 00:28:27.577 "thin_provision": true, 00:28:27.577 "num_allocated_clusters": 0, 00:28:27.577 "snapshot": false, 00:28:27.577 "clone": false, 00:28:27.577 "esnap_clone": false 00:28:27.577 } 00:28:27.577 } 00:28:27.577 } 00:28:27.577 ]' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 --l2p_dram_limit 10' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:27.577 14:36:09 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0be2d68b-a963-4d9e-8ca3-bcf2ff1e22b3 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:27.838 [2024-11-29 14:36:09.474591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.474636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:27.838 [2024-11-29 14:36:09.474648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:27.838 [2024-11-29 14:36:09.474656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.474692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.474701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.838 [2024-11-29 14:36:09.474708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:27.838 [2024-11-29 14:36:09.474725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.474745] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:27.838 [2024-11-29 14:36:09.474926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:27.838 [2024-11-29 14:36:09.474938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.474947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.838 [2024-11-29 14:36:09.475137] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:28:27.838 [2024-11-29 14:36:09.475146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.475192] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f5a9f8c6-46e2-48a9-97a8-befed0908257 00:28:27.838 [2024-11-29 14:36:09.476454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.476484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:27.838 [2024-11-29 14:36:09.476504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:27.838 [2024-11-29 14:36:09.476511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.483424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.483450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.838 [2024-11-29 14:36:09.483459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:28:27.838 [2024-11-29 14:36:09.483466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.483540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.483548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.838 [2024-11-29 14:36:09.483557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:27.838 [2024-11-29 14:36:09.483565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.483600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.483608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:27.838 [2024-11-29 14:36:09.483617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:27.838 [2024-11-29 14:36:09.483623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.483640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:27.838 [2024-11-29 14:36:09.485268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.485288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.838 [2024-11-29 14:36:09.485298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:28:27.838 [2024-11-29 14:36:09.485305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.485333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.838 [2024-11-29 14:36:09.485341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:27.838 [2024-11-29 14:36:09.485348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:27.838 [2024-11-29 14:36:09.485358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.838 [2024-11-29 14:36:09.485370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:27.838 [2024-11-29 14:36:09.485503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:27.838 [2024-11-29 14:36:09.485514] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:27.838 [2024-11-29 14:36:09.485526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:27.838 [2024-11-29 14:36:09.485535] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:27.838 [2024-11-29 14:36:09.485544] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:27.838 [2024-11-29 14:36:09.485550] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:27.838 [2024-11-29 14:36:09.485562] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:27.838 [2024-11-29 14:36:09.485571] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:27.838 [2024-11-29 14:36:09.485579] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:27.838 [2024-11-29 14:36:09.485589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.839 [2024-11-29 14:36:09.485598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:27.839 [2024-11-29 14:36:09.485605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:28:27.839 [2024-11-29 14:36:09.485613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.839 [2024-11-29 14:36:09.485677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.839 [2024-11-29 14:36:09.485738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:27.839 [2024-11-29 14:36:09.485748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:27.839 [2024-11-29 14:36:09.485757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.839 [2024-11-29 14:36:09.485829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:27.839 [2024-11-29 14:36:09.485841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:27.839 [2024-11-29 14:36:09.485852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:27.839 [2024-11-29 14:36:09.485859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:27.839 [2024-11-29 14:36:09.485875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:27.839 [2024-11-29 14:36:09.485888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:27.839 [2024-11-29 14:36:09.485894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.839 [2024-11-29 14:36:09.485906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:27.839 [2024-11-29 14:36:09.485912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:27.839 [2024-11-29 14:36:09.485917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:27.839 [2024-11-29 14:36:09.485926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:27.839 [2024-11-29 14:36:09.485932] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:27.839 [2024-11-29 14:36:09.485939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:27.839 [2024-11-29 14:36:09.485953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:27.839 [2024-11-29 14:36:09.485961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:27.839 [2024-11-29 14:36:09.485974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.839 [2024-11-29 14:36:09.485986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:27.839 [2024-11-29 14:36:09.485993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:27.839 [2024-11-29 14:36:09.485999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.839 [2024-11-29 14:36:09.486006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:27.839 [2024-11-29 14:36:09.486011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.839 [2024-11-29 14:36:09.486024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:27.839 [2024-11-29 14:36:09.486033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:27.839 [2024-11-29 14:36:09.486046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:27.839 [2024-11-29 14:36:09.486051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.839 [2024-11-29 14:36:09.486064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:27.839 [2024-11-29 14:36:09.486071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:27.839 [2024-11-29 14:36:09.486076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:27.839 [2024-11-29 14:36:09.486083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:27.839 [2024-11-29 14:36:09.486088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:27.839 [2024-11-29 14:36:09.486095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:27.839 [2024-11-29 14:36:09.486106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:27.839 [2024-11-29 14:36:09.486111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486117] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:27.839 [2024-11-29 14:36:09.486124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:27.839 [2024-11-29 14:36:09.486133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:27.839 [2024-11-29 14:36:09.486138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:27.839 [2024-11-29 14:36:09.486146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:27.839 [2024-11-29 14:36:09.486151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:27.839 [2024-11-29 14:36:09.486158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:27.839 [2024-11-29 14:36:09.486165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:27.839 [2024-11-29 14:36:09.486172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:27.839 [2024-11-29 14:36:09.486177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:27.839 [2024-11-29 14:36:09.486188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:27.839 [2024-11-29 14:36:09.486195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:27.839 [2024-11-29 14:36:09.486212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:27.839 [2024-11-29 14:36:09.486220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:27.839 [2024-11-29 14:36:09.486225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:27.839 [2024-11-29 14:36:09.486234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:27.839 [2024-11-29 14:36:09.486239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:27.839 [2024-11-29 14:36:09.486247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:27.839 [2024-11-29 14:36:09.486253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:27.839 [2024-11-29 14:36:09.486260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:27.839 [2024-11-29 14:36:09.486266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:27.839 [2024-11-29 14:36:09.486300] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:27.839 [2024-11-29 14:36:09.486309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:27.839 [2024-11-29 14:36:09.486324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:27.839 [2024-11-29 14:36:09.486331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:27.839 [2024-11-29 14:36:09.486337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:27.839 [2024-11-29 14:36:09.486344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.839 [2024-11-29 14:36:09.486351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:27.839 [2024-11-29 14:36:09.486360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:28:27.839 [2024-11-29 14:36:09.486366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.839 [2024-11-29 14:36:09.486400] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:27.839 [2024-11-29 14:36:09.486408] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:32.039 [2024-11-29 14:36:13.541677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.541736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:32.039 [2024-11-29 14:36:13.541754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4055.259 ms 00:28:32.039 [2024-11-29 14:36:13.541761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.552194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.552235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.039 [2024-11-29 14:36:13.552247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.351 ms 00:28:32.039 [2024-11-29 14:36:13.552254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.552326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.552333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:32.039 [2024-11-29 14:36:13.552345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:32.039 [2024-11-29 14:36:13.552351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.561344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.561375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.039 [2024-11-29 14:36:13.561386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.956 ms 00:28:32.039 [2024-11-29 14:36:13.561393] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.561418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.561427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.039 [2024-11-29 14:36:13.561435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:32.039 [2024-11-29 14:36:13.561441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.561842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.561857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.039 [2024-11-29 14:36:13.561867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:28:32.039 [2024-11-29 14:36:13.561873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.561958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.561966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.039 [2024-11-29 14:36:13.561982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:32.039 [2024-11-29 14:36:13.561990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.039 [2024-11-29 14:36:13.576704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.039 [2024-11-29 14:36:13.576754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.040 [2024-11-29 14:36:13.576774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:28:32.040 [2024-11-29 14:36:13.576787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.588261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:32.040 [2024-11-29 14:36:13.591160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.591192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:32.040 [2024-11-29 14:36:13.591200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.260 ms 00:28:32.040 [2024-11-29 14:36:13.591209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.661859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.662050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:32.040 [2024-11-29 14:36:13.662066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.631 ms 00:28:32.040 [2024-11-29 14:36:13.662078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.662226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.662237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:32.040 [2024-11-29 14:36:13.662244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:28:32.040 [2024-11-29 14:36:13.662252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.665688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.665719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:32.040 [2024-11-29 14:36:13.665727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.412 ms 00:28:32.040 [2024-11-29 14:36:13.665736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.668757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.668864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:32.040 [2024-11-29 14:36:13.668877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:28:32.040 [2024-11-29 14:36:13.668885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.669130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.669141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:32.040 [2024-11-29 14:36:13.669148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:28:32.040 [2024-11-29 14:36:13.669157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.703710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.703820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:32.040 [2024-11-29 14:36:13.703834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.537 ms 00:28:32.040 [2024-11-29 14:36:13.703843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.708507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.708537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:32.040 [2024-11-29 14:36:13.708545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:28:32.040 [2024-11-29 14:36:13.708554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.712118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.712149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:32.040 [2024-11-29 14:36:13.712157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.536 ms 00:28:32.040 [2024-11-29 14:36:13.712165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.716355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.716385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:32.040 [2024-11-29 14:36:13.716392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:28:32.040 [2024-11-29 14:36:13.716401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.716432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.716441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:32.040 [2024-11-29 14:36:13.716449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:32.040 [2024-11-29 14:36:13.716457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.716525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.040 [2024-11-29 14:36:13.716540] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:32.040 [2024-11-29 14:36:13.716551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:32.040 [2024-11-29 14:36:13.716563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.040 [2024-11-29 14:36:13.717377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4242.408 ms, result 0 00:28:32.040 { 00:28:32.040 "name": "ftl0", 00:28:32.040 "uuid": "f5a9f8c6-46e2-48a9-97a8-befed0908257" 00:28:32.040 } 00:28:32.040 14:36:13 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:32.040 14:36:13 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:32.297 14:36:13 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:32.297 14:36:13 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:32.556 [2024-11-29 14:36:14.130413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.556 [2024-11-29 14:36:14.130587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.556 [2024-11-29 14:36:14.130643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:32.556 [2024-11-29 14:36:14.130663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.556 [2024-11-29 14:36:14.130701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.557 [2024-11-29 14:36:14.131282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.131333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.557 [2024-11-29 14:36:14.131484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:28:32.557 [2024-11-29 14:36:14.131509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.131717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.131728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.557 [2024-11-29 14:36:14.131736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:28:32.557 [2024-11-29 14:36:14.131745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.134157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.134236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.557 [2024-11-29 14:36:14.134247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.400 ms 00:28:32.557 [2024-11-29 14:36:14.134255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.138941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.138967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.557 [2024-11-29 14:36:14.138975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:28:32.557 [2024-11-29 14:36:14.138984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.141021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:32.557 [2024-11-29 14:36:14.141118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.557 [2024-11-29 14:36:14.141130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:28:32.557 [2024-11-29 14:36:14.141138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.146553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.146582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.557 [2024-11-29 14:36:14.146590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.391 ms 00:28:32.557 [2024-11-29 14:36:14.146598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.146692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.146703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:32.557 [2024-11-29 14:36:14.146711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:32.557 [2024-11-29 14:36:14.146719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.149325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.149353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:32.557 [2024-11-29 14:36:14.149361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:28:32.557 [2024-11-29 14:36:14.149369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.151277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.151306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:32.557 [2024-11-29 14:36:14.151314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.882 ms 00:28:32.557 [2024-11-29 14:36:14.151321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.152832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.152858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:32.557 [2024-11-29 14:36:14.152866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:28:32.557 [2024-11-29 14:36:14.152873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.154341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.557 [2024-11-29 14:36:14.154372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:32.557 [2024-11-29 14:36:14.154379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:28:32.557 [2024-11-29 14:36:14.154386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.557 [2024-11-29 14:36:14.154409] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.557 [2024-11-29 14:36:14.154423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154439] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154625] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 14:36:14.154788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.557 [2024-11-29 
14:36:14.154796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:32.558 [2024-11-29 14:36:14.154973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.154994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.558 [2024-11-29 14:36:14.155154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.558 [2024-11-29 14:36:14.155160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f5a9f8c6-46e2-48a9-97a8-befed0908257 00:28:32.558 
[2024-11-29 14:36:14.155168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:32.558 [2024-11-29 14:36:14.155174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:32.558 [2024-11-29 14:36:14.155182] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:32.558 [2024-11-29 14:36:14.155191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:32.558 [2024-11-29 14:36:14.155199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.558 [2024-11-29 14:36:14.155206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.558 [2024-11-29 14:36:14.155214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.558 [2024-11-29 14:36:14.155219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.558 [2024-11-29 14:36:14.155225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.558 [2024-11-29 14:36:14.155231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.558 [2024-11-29 14:36:14.155240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.558 [2024-11-29 14:36:14.155247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:28:32.558 [2024-11-29 14:36:14.155255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.157065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.558 [2024-11-29 14:36:14.157093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.558 [2024-11-29 14:36:14.157100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:28:32.558 [2024-11-29 14:36:14.157108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.157206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.558 [2024-11-29 14:36:14.157215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.558 [2024-11-29 14:36:14.157222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:32.558 [2024-11-29 14:36:14.157229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.163287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.163377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.558 [2024-11-29 14:36:14.163417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.163439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.163504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.163525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.558 [2024-11-29 14:36:14.163540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.163557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.163612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.163681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.558 [2024-11-29 14:36:14.163700] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.163717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.163742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.163763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.558 [2024-11-29 14:36:14.163779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.163796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.174233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.174341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.558 [2024-11-29 14:36:14.174379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.174402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.183236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.183361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.558 [2024-11-29 14:36:14.183413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.183437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.183525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.183552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.558 [2024-11-29 14:36:14.183568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.558 [2024-11-29 14:36:14.183624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.558 [2024-11-29 14:36:14.183670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.558 [2024-11-29 14:36:14.183690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.559 [2024-11-29 14:36:14.183707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.559 [2024-11-29 14:36:14.183944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.559 [2024-11-29 14:36:14.184060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.559 [2024-11-29 14:36:14.184087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.559 [2024-11-29 14:36:14.184105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.559 [2024-11-29 14:36:14.184156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.559 [2024-11-29 14:36:14.184201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.559 [2024-11-29 14:36:14.184225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.559 [2024-11-29 14:36:14.184241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.559 [2024-11-29 14:36:14.184261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.559 [2024-11-29 14:36:14.184547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.559 [2024-11-29 14:36:14.184573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:32.559 [2024-11-29 14:36:14.184591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.559 [2024-11-29 14:36:14.184648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.559 [2024-11-29 14:36:14.184727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.559 [2024-11-29 14:36:14.184751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.559 [2024-11-29 14:36:14.184769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.559 [2024-11-29 14:36:14.184812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.559 [2024-11-29 14:36:14.184983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.539 ms, result 0 00:28:32.559 true 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93163 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93163 ']' 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93163 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93163 00:28:32.559 killing process with pid 93163 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93163' 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93163 00:28:32.559 14:36:14 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93163 00:28:36.761 14:36:18 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:40.959 262144+0 records in 00:28:40.959 262144+0 records out 00:28:40.959 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.13198 s, 260 MB/s 00:28:40.959 14:36:22 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:42.339 14:36:24 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:42.339 [2024-11-29 14:36:24.086121] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:42.339 [2024-11-29 14:36:24.086231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93376 ] 00:28:42.597 [2024-11-29 14:36:24.230951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.597 [2024-11-29 14:36:24.302473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.858 [2024-11-29 14:36:24.427821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:42.858 [2024-11-29 14:36:24.427891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:42.858 [2024-11-29 14:36:24.586287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.586335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:42.858 [2024-11-29 14:36:24.586352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:42.858 [2024-11-29 14:36:24.586364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.586416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.586426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:42.858 [2024-11-29 14:36:24.586435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:42.858 [2024-11-29 14:36:24.586442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.586462] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:42.858 [2024-11-29 14:36:24.586725] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:42.858 [2024-11-29 14:36:24.586746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.586754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:42.858 [2024-11-29 14:36:24.586768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:28:42.858 [2024-11-29 14:36:24.586779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.588117] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:42.858 [2024-11-29 14:36:24.591401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.591434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:42.858 [2024-11-29 14:36:24.591445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:28:42.858 [2024-11-29 14:36:24.591453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.591526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.591537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:42.858 [2024-11-29 14:36:24.591545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:42.858 [2024-11-29 14:36:24.591559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.597975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:42.858 [2024-11-29 14:36:24.598003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:42.858 [2024-11-29 14:36:24.598013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.368 ms 00:28:42.858 [2024-11-29 14:36:24.598023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.598109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.598120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:42.858 [2024-11-29 14:36:24.598128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:42.858 [2024-11-29 14:36:24.598136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.598175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.598190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:42.858 [2024-11-29 14:36:24.598202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:42.858 [2024-11-29 14:36:24.598209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.598235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:42.858 [2024-11-29 14:36:24.599916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.599937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:42.858 [2024-11-29 14:36:24.599947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:28:42.858 [2024-11-29 14:36:24.599955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.599985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.599993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:42.858 [2024-11-29 14:36:24.600000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:42.858 [2024-11-29 14:36:24.600007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.858 [2024-11-29 14:36:24.600035] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:42.858 [2024-11-29 14:36:24.600059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:42.858 [2024-11-29 14:36:24.600101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:42.858 [2024-11-29 14:36:24.600116] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:42.858 [2024-11-29 14:36:24.600221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:42.858 [2024-11-29 14:36:24.600232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:42.858 [2024-11-29 14:36:24.600242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:42.858 [2024-11-29 14:36:24.600252] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:42.858 [2024-11-29 14:36:24.600264] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:42.858 [2024-11-29 14:36:24.600277] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:42.858 [2024-11-29 14:36:24.600286] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:42.858 [2024-11-29 14:36:24.600299] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:42.858 [2024-11-29 14:36:24.600306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:42.858 [2024-11-29 14:36:24.600321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.858 [2024-11-29 14:36:24.600332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:42.858 [2024-11-29 14:36:24.600340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:28:42.858 [2024-11-29 14:36:24.600348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.859 [2024-11-29 14:36:24.600433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.859 [2024-11-29 14:36:24.600444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:42.859 [2024-11-29 14:36:24.600451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:42.859 [2024-11-29 14:36:24.600458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.859 [2024-11-29 14:36:24.600565] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:42.859 [2024-11-29 14:36:24.600578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:42.859 [2024-11-29 14:36:24.600596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:42.859 [2024-11-29 14:36:24.600637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:42.859 [2024-11-29 14:36:24.600668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:42.859 [2024-11-29 14:36:24.600685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:42.859 [2024-11-29 14:36:24.600700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:42.859 [2024-11-29 14:36:24.600710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:42.859 [2024-11-29 14:36:24.600723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:42.859 [2024-11-29 14:36:24.600731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:42.859 [2024-11-29 14:36:24.600739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:42.859 [2024-11-29 14:36:24.600754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600765] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:42.859 [2024-11-29 14:36:24.600788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:42.859 [2024-11-29 14:36:24.600820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:42.859 [2024-11-29 14:36:24.600846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:42.859 [2024-11-29 14:36:24.600873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:42.859 [2024-11-29 14:36:24.600889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:42.859 [2024-11-29 14:36:24.600897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:42.859 [2024-11-29 14:36:24.600912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:42.859 [2024-11-29 14:36:24.600920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:42.859 [2024-11-29 14:36:24.600927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:42.859 [2024-11-29 14:36:24.600934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:42.859 [2024-11-29 14:36:24.600941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:42.859 [2024-11-29 14:36:24.600947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:42.859 [2024-11-29 14:36:24.600960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:42.859 [2024-11-29 14:36:24.600968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.600976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:42.859 [2024-11-29 14:36:24.600988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:42.859 [2024-11-29 14:36:24.600996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:42.859 [2024-11-29 14:36:24.601005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:42.859 [2024-11-29 14:36:24.601012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:42.859 [2024-11-29 14:36:24.601023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:42.859 [2024-11-29 14:36:24.601030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:42.859 
[2024-11-29 14:36:24.601036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:42.859 [2024-11-29 14:36:24.601043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:42.859 [2024-11-29 14:36:24.601053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:42.859 [2024-11-29 14:36:24.601065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:42.859 [2024-11-29 14:36:24.601075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:42.859 [2024-11-29 14:36:24.601087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:42.859 [2024-11-29 14:36:24.601094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:42.859 [2024-11-29 14:36:24.601101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:42.859 [2024-11-29 14:36:24.601108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:42.859 [2024-11-29 14:36:24.601115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:42.859 [2024-11-29 14:36:24.601124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:42.859 [2024-11-29 14:36:24.601131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:42.859 [2024-11-29 14:36:24.601138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:42.859 [2024-11-29 14:36:24.601145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:42.859 [2024-11-29 14:36:24.601153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:42.859 [2024-11-29 14:36:24.601160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:42.859 [2024-11-29 14:36:24.601167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:42.859 [2024-11-29 14:36:24.601174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:42.859 [2024-11-29 14:36:24.601181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:42.859 [2024-11-29 14:36:24.601188] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:42.859 [2024-11-29 14:36:24.601199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:42.860 [2024-11-29 14:36:24.601208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:42.860 [2024-11-29 14:36:24.601215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:42.860 [2024-11-29 14:36:24.601222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:42.860 [2024-11-29 14:36:24.601229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:42.860 [2024-11-29 14:36:24.601237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.601247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:42.860 [2024-11-29 14:36:24.601255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:42.860 [2024-11-29 14:36:24.601262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.622667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.622825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:42.860 [2024-11-29 14:36:24.622888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.364 ms 00:28:42.860 [2024-11-29 14:36:24.622914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.623021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.623044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:42.860 [2024-11-29 14:36:24.623065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:42.860 [2024-11-29 14:36:24.623085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.633300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.633418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:42.860 [2024-11-29 14:36:24.633474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.136 ms 00:28:42.860 [2024-11-29 14:36:24.633513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.633557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.633579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:42.860 [2024-11-29 14:36:24.633599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:42.860 [2024-11-29 14:36:24.633618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.634074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.634235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:42.860 [2024-11-29 14:36:24.634292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:28:42.860 [2024-11-29 14:36:24.634314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.634471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.634508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:42.860 [2024-11-29 14:36:24.634556] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:28:42.860 [2024-11-29 14:36:24.634578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.640486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.640613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:42.860 [2024-11-29 14:36:24.640664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.876 ms 00:28:42.860 [2024-11-29 14:36:24.640686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:42.860 [2024-11-29 14:36:24.643852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:42.860 [2024-11-29 14:36:24.643965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:42.860 [2024-11-29 14:36:24.644033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:42.860 [2024-11-29 14:36:24.644054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:42.860 [2024-11-29 14:36:24.644073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:28:42.860 [2024-11-29 14:36:24.644091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.659439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.659582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:43.121 [2024-11-29 14:36:24.659646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.048 ms 00:28:43.121 [2024-11-29 14:36:24.659683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.661832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.661938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:43.121 [2024-11-29 14:36:24.661989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:28:43.121 [2024-11-29 14:36:24.662011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.663821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.663922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:43.121 [2024-11-29 14:36:24.663972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.771 ms 00:28:43.121 [2024-11-29 14:36:24.663982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.664666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.664692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:43.121 [2024-11-29 14:36:24.664704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:28:43.121 [2024-11-29 14:36:24.664712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.684876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.684919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:43.121 [2024-11-29 14:36:24.684939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.145 ms 00:28:43.121 [2024-11-29 14:36:24.684947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.693000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:43.121 [2024-11-29 14:36:24.696148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.696186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:43.121 [2024-11-29 14:36:24.696199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.160 ms 00:28:43.121 [2024-11-29 14:36:24.696207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.696306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.696317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:43.121 [2024-11-29 14:36:24.696331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:43.121 [2024-11-29 14:36:24.696340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.696403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.696414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:43.121 [2024-11-29 14:36:24.696423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:43.121 [2024-11-29 14:36:24.696431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.696453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.696462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:43.121 [2024-11-29 14:36:24.696469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:43.121 [2024-11-29 14:36:24.696477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.696529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:43.121 [2024-11-29 14:36:24.696540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.696552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:43.121 [2024-11-29 14:36:24.696560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:43.121 [2024-11-29 14:36:24.696568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.700732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.700769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:43.121 [2024-11-29 14:36:24.700779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:28:43.121 [2024-11-29 14:36:24.700788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 [2024-11-29 14:36:24.700860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.121 [2024-11-29 14:36:24.700870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:43.121 [2024-11-29 14:36:24.700878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:43.121 [2024-11-29 14:36:24.700889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.121 
[2024-11-29 14:36:24.702111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.387 ms, result 0 00:28:44.056  [2024-11-29T14:36:26.784Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-29T14:36:27.729Z] Copying: 40/1024 [MB] (25 MBps) [2024-11-29T14:36:29.115Z] Copying: 55/1024 [MB] (14 MBps) [2024-11-29T14:36:30.122Z] Copying: 70/1024 [MB] (14 MBps) [2024-11-29T14:36:30.718Z] Copying: 82/1024 [MB] (12 MBps) [2024-11-29T14:36:32.110Z] Copying: 96/1024 [MB] (13 MBps) [2024-11-29T14:36:33.055Z] Copying: 117/1024 [MB] (21 MBps) [2024-11-29T14:36:34.003Z] Copying: 134/1024 [MB] (16 MBps) [2024-11-29T14:36:34.950Z] Copying: 151/1024 [MB] (16 MBps) [2024-11-29T14:36:35.898Z] Copying: 165/1024 [MB] (14 MBps) [2024-11-29T14:36:36.847Z] Copying: 182/1024 [MB] (16 MBps) [2024-11-29T14:36:37.794Z] Copying: 198/1024 [MB] (15 MBps) [2024-11-29T14:36:38.741Z] Copying: 211/1024 [MB] (13 MBps) [2024-11-29T14:36:40.130Z] Copying: 222/1024 [MB] (10 MBps) [2024-11-29T14:36:41.072Z] Copying: 232/1024 [MB] (10 MBps) [2024-11-29T14:36:42.015Z] Copying: 257/1024 [MB] (24 MBps) [2024-11-29T14:36:42.961Z] Copying: 271/1024 [MB] (14 MBps) [2024-11-29T14:36:43.908Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-29T14:36:44.855Z] Copying: 295/1024 [MB] (11 MBps) [2024-11-29T14:36:45.799Z] Copying: 311/1024 [MB] (16 MBps) [2024-11-29T14:36:46.746Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-29T14:36:48.135Z] Copying: 337/1024 [MB] (14 MBps) [2024-11-29T14:36:49.081Z] Copying: 351/1024 [MB] (13 MBps) [2024-11-29T14:36:50.028Z] Copying: 367/1024 [MB] (16 MBps) [2024-11-29T14:36:50.975Z] Copying: 382/1024 [MB] (15 MBps) [2024-11-29T14:36:51.923Z] Copying: 395/1024 [MB] (13 MBps) [2024-11-29T14:36:52.871Z] Copying: 417/1024 [MB] (21 MBps) [2024-11-29T14:36:53.819Z] Copying: 429/1024 [MB] (12 MBps) [2024-11-29T14:36:54.767Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-29T14:36:56.156Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-29T14:36:56.730Z] Copying: 469/1024 [MB] (18 MBps) [2024-11-29T14:36:57.734Z] Copying: 494/1024 [MB] (25 MBps) [2024-11-29T14:36:59.119Z] Copying: 506/1024 [MB] (11 MBps) [2024-11-29T14:37:00.060Z] Copying: 550/1024 [MB] (44 MBps) [2024-11-29T14:37:01.003Z] Copying: 575/1024 [MB] (25 MBps) [2024-11-29T14:37:01.946Z] Copying: 594/1024 [MB] (19 MBps) [2024-11-29T14:37:02.893Z] Copying: 619/1024 [MB] (24 MBps) [2024-11-29T14:37:03.832Z] Copying: 640/1024 [MB] (21 MBps) [2024-11-29T14:37:04.771Z] Copying: 663/1024 [MB] (22 MBps) [2024-11-29T14:37:06.155Z] Copying: 680/1024 [MB] (16 MBps) [2024-11-29T14:37:06.729Z] Copying: 692/1024 [MB] (12 MBps) [2024-11-29T14:37:08.117Z] Copying: 711/1024 [MB] (19 MBps) [2024-11-29T14:37:09.061Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-29T14:37:10.007Z] Copying: 745/1024 [MB] (15 MBps) [2024-11-29T14:37:10.947Z] Copying: 761/1024 [MB] (16 MBps) [2024-11-29T14:37:11.889Z] Copying: 778/1024 [MB] (17 MBps) [2024-11-29T14:37:12.869Z] Copying: 791/1024 [MB] (12 MBps) [2024-11-29T14:37:13.813Z] Copying: 818/1024 [MB] (26 MBps) [2024-11-29T14:37:14.758Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-29T14:37:16.143Z] Copying: 843/1024 [MB] (14 MBps) [2024-11-29T14:37:16.717Z] Copying: 860/1024 [MB] (17 MBps) [2024-11-29T14:37:18.138Z] Copying: 875/1024 [MB] (14 MBps) [2024-11-29T14:37:19.082Z] Copying: 887/1024 [MB] (11 MBps) [2024-11-29T14:37:20.026Z] Copying: 898/1024 [MB] (11 MBps) [2024-11-29T14:37:20.973Z] Copying: 910/1024 [MB] (12 MBps) [2024-11-29T14:37:21.918Z] Copying: 921/1024 [MB] (10 MBps) 
[2024-11-29T14:37:22.866Z] Copying: 944/1024 [MB] (23 MBps) [2024-11-29T14:37:23.810Z] Copying: 965/1024 [MB] (20 MBps) [2024-11-29T14:37:24.754Z] Copying: 998/1024 [MB] (33 MBps) [2024-11-29T14:37:25.018Z] Copying: 1019/1024 [MB] (20 MBps) [2024-11-29T14:37:25.018Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-29 14:37:24.897831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.224 [2024-11-29 14:37:24.897987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:43.224 [2024-11-29 14:37:24.898063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:43.224 [2024-11-29 14:37:24.898089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.224 [2024-11-29 14:37:24.898130] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:43.224 [2024-11-29 14:37:24.899065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.224 [2024-11-29 14:37:24.899228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:43.224 [2024-11-29 14:37:24.899299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:29:43.224 [2024-11-29 14:37:24.899325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.224 [2024-11-29 14:37:24.901366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.224 [2024-11-29 14:37:24.901523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:43.224 [2024-11-29 14:37:24.901589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:29:43.224 [2024-11-29 14:37:24.901612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.224 [2024-11-29 14:37:24.901655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.224 [2024-11-29 14:37:24.901692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:43.224 [2024-11-29 14:37:24.901712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:43.224 [2024-11-29 14:37:24.901731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.224 [2024-11-29 14:37:24.901798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.224 [2024-11-29 14:37:24.901821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:43.224 [2024-11-29 14:37:24.901843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:43.224 [2024-11-29 14:37:24.901903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.224 [2024-11-29 14:37:24.901932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:43.224 [2024-11-29 14:37:24.901957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.901988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 
[2024-11-29 14:37:24.902103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.902917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: 
free 00:29:43.224 [2024-11-29 14:37:24.903149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.903956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 
261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:43.224 [2024-11-29 14:37:24.904519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.904994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:43.225 [2024-11-29 14:37:24.905172] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:43.225 [2024-11-29 14:37:24.905187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f5a9f8c6-46e2-48a9-97a8-befed0908257 00:29:43.225 [2024-11-29 14:37:24.905195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:43.225 [2024-11-29 14:37:24.905202] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:43.225 [2024-11-29 14:37:24.905209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:43.225 [2024-11-29 
14:37:24.905223] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:43.225 [2024-11-29 14:37:24.905231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:43.225 [2024-11-29 14:37:24.905243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:43.225 [2024-11-29 14:37:24.905250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:43.225 [2024-11-29 14:37:24.905257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:43.225 [2024-11-29 14:37:24.905263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:43.225 [2024-11-29 14:37:24.905271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.225 [2024-11-29 14:37:24.905280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:43.225 [2024-11-29 14:37:24.905289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:29:43.225 [2024-11-29 14:37:24.905296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.907545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.225 [2024-11-29 14:37:24.907574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:43.225 [2024-11-29 14:37:24.907584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:29:43.225 [2024-11-29 14:37:24.907593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.907708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.225 [2024-11-29 14:37:24.907716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:43.225 [2024-11-29 14:37:24.907725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:43.225 [2024-11-29 14:37:24.907737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.913979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.914010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:43.225 [2024-11-29 14:37:24.914020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.914028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.914082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.914091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:43.225 [2024-11-29 14:37:24.914099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.914110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.914157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.914168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:43.225 [2024-11-29 14:37:24.914175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.914183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.914198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.914206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize valid map 00:29:43.225 [2024-11-29 14:37:24.914215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.914223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.927031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.927082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:43.225 [2024-11-29 14:37:24.927093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.927102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.936918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.936978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:43.225 [2024-11-29 14:37:24.936989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.937052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.937061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:43.225 [2024-11-29 14:37:24.937069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.937102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.937116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:43.225 [2024-11-29 14:37:24.937124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.937189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.937199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:43.225 [2024-11-29 14:37:24.937208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.937250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.937260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:43.225 [2024-11-29 14:37:24.937268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.225 [2024-11-29 14:37:24.937313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.225 [2024-11-29 14:37:24.937326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:43.225 [2024-11-29 14:37:24.937334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.225 [2024-11-29 14:37:24.937342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.226 [2024-11-29 14:37:24.937391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.226 [2024-11-29 
14:37:24.937402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:43.226 [2024-11-29 14:37:24.937411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.226 [2024-11-29 14:37:24.937419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.226 [2024-11-29 14:37:24.937567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.699 ms, result 0 00:29:44.168 00:29:44.168 00:29:44.168 14:37:25 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:44.168 [2024-11-29 14:37:25.914066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:44.168 [2024-11-29 14:37:25.914215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94002 ] 00:29:44.430 [2024-11-29 14:37:26.066829] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.430 [2024-11-29 14:37:26.102424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.430 [2024-11-29 14:37:26.208893] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:44.430 [2024-11-29 14:37:26.208984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:44.693 [2024-11-29 14:37:26.369356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.369420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:44.693 [2024-11-29 14:37:26.369439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:44.693 [2024-11-29 14:37:26.369447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.369529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.369542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:44.693 [2024-11-29 14:37:26.369560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:44.693 [2024-11-29 14:37:26.369569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.369594] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:44.693 [2024-11-29 14:37:26.370010] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:44.693 [2024-11-29 14:37:26.370050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.370058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:44.693 [2024-11-29 14:37:26.370071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:29:44.693 [2024-11-29 14:37:26.370082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.370401] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:44.693 [2024-11-29 14:37:26.370424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:44.693 [2024-11-29 14:37:26.370432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:44.693 [2024-11-29 14:37:26.370442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:44.693 [2024-11-29 14:37:26.370450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.370521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.370534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:44.693 [2024-11-29 14:37:26.370548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:44.693 [2024-11-29 14:37:26.370558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.370805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.370815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:44.693 [2024-11-29 14:37:26.370824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:29:44.693 [2024-11-29 14:37:26.370831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.370911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.370922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:44.693 [2024-11-29 14:37:26.370931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:44.693 [2024-11-29 14:37:26.370938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.370962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.370970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:44.693 [2024-11-29 14:37:26.370982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:44.693 [2024-11-29 14:37:26.370989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.371009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:44.693 [2024-11-29 14:37:26.373143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.373313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:44.693 [2024-11-29 14:37:26.373343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:29:44.693 [2024-11-29 14:37:26.373351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.693 [2024-11-29 14:37:26.373388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.693 [2024-11-29 14:37:26.373396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:44.693 [2024-11-29 14:37:26.373404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:44.693 [2024-11-29 14:37:26.373411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.694 [2024-11-29 14:37:26.373464] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:44.694 [2024-11-29 14:37:26.373485] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:44.694 [2024-11-29 14:37:26.373541] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:44.694 [2024-11-29 14:37:26.373557] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:44.694 [2024-11-29 14:37:26.373663] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:44.694 [2024-11-29 14:37:26.373673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:44.694 [2024-11-29 14:37:26.373684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:44.694 [2024-11-29 14:37:26.373695] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:44.694 [2024-11-29 14:37:26.373705] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:44.694 [2024-11-29 14:37:26.373715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:44.694 [2024-11-29 14:37:26.373725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:44.694 [2024-11-29 14:37:26.373732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:44.694 [2024-11-29 14:37:26.373739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:44.694 [2024-11-29 14:37:26.373752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.694 [2024-11-29 14:37:26.373759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:44.694 [2024-11-29 14:37:26.373767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:29:44.694 [2024-11-29 14:37:26.373775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.694 [2024-11-29 14:37:26.373858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.694 [2024-11-29 14:37:26.373867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:44.694 [2024-11-29 14:37:26.373876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:44.694 [2024-11-29 14:37:26.373887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.694 [2024-11-29 14:37:26.373994] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:44.694 [2024-11-29 14:37:26.374005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:44.694 [2024-11-29 14:37:26.374015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:44.694 [2024-11-29 14:37:26.374056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:44.694 [2024-11-29 14:37:26.374082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:44.694 [2024-11-29 14:37:26.374098] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:44.694 [2024-11-29 14:37:26.374106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:44.694 [2024-11-29 14:37:26.374114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:44.694 [2024-11-29 14:37:26.374122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:44.694 [2024-11-29 14:37:26.374130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:44.694 [2024-11-29 14:37:26.374138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:44.694 [2024-11-29 14:37:26.374154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:44.694 [2024-11-29 14:37:26.374179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:44.694 [2024-11-29 14:37:26.374203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:44.694 [2024-11-29 14:37:26.374225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:44.694 [2024-11-29 14:37:26.374245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:44.694 [2024-11-29 14:37:26.374265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:44.694 [2024-11-29 14:37:26.374278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:44.694 [2024-11-29 14:37:26.374293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:44.694 [2024-11-29 14:37:26.374299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:44.694 [2024-11-29 14:37:26.374306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:44.694 [2024-11-29 14:37:26.374312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:44.694 [2024-11-29 14:37:26.374320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:44.694 [2024-11-29 14:37:26.374334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:44.694 
[2024-11-29 14:37:26.374340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374347] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:44.694 [2024-11-29 14:37:26.374355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:44.694 [2024-11-29 14:37:26.374366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:44.694 [2024-11-29 14:37:26.374381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:44.694 [2024-11-29 14:37:26.374387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:44.694 [2024-11-29 14:37:26.374394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:44.694 [2024-11-29 14:37:26.374401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:44.694 [2024-11-29 14:37:26.374410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:44.694 [2024-11-29 14:37:26.374417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:44.694 [2024-11-29 14:37:26.374425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:44.694 [2024-11-29 14:37:26.374440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:44.694 [2024-11-29 14:37:26.374458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:44.694 [2024-11-29 14:37:26.374465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:44.694 [2024-11-29 14:37:26.374471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:44.694 [2024-11-29 14:37:26.374478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:44.694 [2024-11-29 14:37:26.374485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:44.694 [2024-11-29 14:37:26.374506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:44.694 [2024-11-29 14:37:26.374513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:44.694 [2024-11-29 14:37:26.374520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:44.694 [2024-11-29 14:37:26.374527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374541] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:44.694 [2024-11-29 14:37:26.374566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:44.694 [2024-11-29 14:37:26.374578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:44.694 [2024-11-29 14:37:26.374595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:44.695 [2024-11-29 14:37:26.374603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:44.695 [2024-11-29 14:37:26.374611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:44.695 [2024-11-29 14:37:26.374618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.374626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:44.695 [2024-11-29 14:37:26.374635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:29:44.695 [2024-11-29 14:37:26.374642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.391673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.391863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:44.695 [2024-11-29 14:37:26.391891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:29:44.695 [2024-11-29 14:37:26.391900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.391992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.392002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:44.695 [2024-11-29 14:37:26.392011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:44.695 [2024-11-29 14:37:26.392018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.404173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.404225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:44.695 [2024-11-29 14:37:26.404242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.085 ms 00:29:44.695 [2024-11-29 14:37:26.404251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.404290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.404300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:44.695 
[2024-11-29 14:37:26.404310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:44.695 [2024-11-29 14:37:26.404319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.404426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.404445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:44.695 [2024-11-29 14:37:26.404456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:44.695 [2024-11-29 14:37:26.404469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.404636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.404648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:44.695 [2024-11-29 14:37:26.404663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:29:44.695 [2024-11-29 14:37:26.404679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.411831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.411876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:44.695 [2024-11-29 14:37:26.411886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.130 ms 00:29:44.695 [2024-11-29 14:37:26.411900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.412015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:44.695 [2024-11-29 14:37:26.412028] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:44.695 [2024-11-29 14:37:26.412038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.412051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:44.695 [2024-11-29 14:37:26.412061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:44.695 [2024-11-29 14:37:26.412068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.424362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.424409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:44.695 [2024-11-29 14:37:26.424419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:29:44.695 [2024-11-29 14:37:26.424427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.424574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.424585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:44.695 [2024-11-29 14:37:26.424600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:29:44.695 [2024-11-29 14:37:26.424608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.424659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.424669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:44.695 [2024-11-29 14:37:26.424677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.001 ms 00:29:44.695 [2024-11-29 14:37:26.424689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.424997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.425022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:44.695 [2024-11-29 14:37:26.425030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:29:44.695 [2024-11-29 14:37:26.425038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.425056] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:44.695 [2024-11-29 14:37:26.425066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.425074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:44.695 [2024-11-29 14:37:26.425085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:44.695 [2024-11-29 14:37:26.425096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.434330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:44.695 [2024-11-29 14:37:26.434486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.434525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:44.695 [2024-11-29 14:37:26.434534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:29:44.695 [2024-11-29 14:37:26.434542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.436942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.436977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:44.695 [2024-11-29 14:37:26.436986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:29:44.695 [2024-11-29 14:37:26.436994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.437089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.437099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:44.695 [2024-11-29 14:37:26.437115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:44.695 [2024-11-29 14:37:26.437123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.437145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.437156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:44.695 [2024-11-29 14:37:26.437173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:44.695 [2024-11-29 14:37:26.437180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.437211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:44.695 [2024-11-29 14:37:26.437220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.437234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:44.695 [2024-11-29 14:37:26.437242] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:44.695 [2024-11-29 14:37:26.437249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.443647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.443696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:44.695 [2024-11-29 14:37:26.443713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.378 ms 00:29:44.695 [2024-11-29 14:37:26.443720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.443805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.695 [2024-11-29 14:37:26.443816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:44.695 [2024-11-29 14:37:26.443824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:44.695 [2024-11-29 14:37:26.443832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.695 [2024-11-29 14:37:26.445018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 75.212 ms, result 0 00:29:46.085  [2024-11-29T14:37:28.821Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-29T14:37:29.764Z] Copying: 29/1024 [MB] (15 MBps) [2024-11-29T14:37:30.707Z] Copying: 46/1024 [MB] (17 MBps) [2024-11-29T14:37:31.696Z] Copying: 58/1024 [MB] (12 MBps) [2024-11-29T14:37:32.639Z] Copying: 82/1024 [MB] (23 MBps) [2024-11-29T14:37:34.026Z] Copying: 104/1024 [MB] (21 MBps) [2024-11-29T14:37:34.967Z] Copying: 120/1024 [MB] (16 MBps) [2024-11-29T14:37:35.907Z] Copying: 134/1024 [MB] (14 MBps) [2024-11-29T14:37:36.850Z] Copying: 146/1024 [MB] (11 MBps) [2024-11-29T14:37:37.791Z] Copying: 156/1024 [MB] (10 MBps) [2024-11-29T14:37:38.737Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-29T14:37:39.683Z] Copying: 177/1024 [MB] (10 MBps) [2024-11-29T14:37:41.066Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-29T14:37:41.637Z] Copying: 203192/1048576 [kB] (10116 kBps) [2024-11-29T14:37:43.022Z] Copying: 208/1024 [MB] (10 MBps) [2024-11-29T14:37:43.964Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-29T14:37:44.905Z] Copying: 229/1024 [MB] (10 MBps) [2024-11-29T14:37:45.845Z] Copying: 245188/1048576 [kB] (10156 kBps) [2024-11-29T14:37:46.800Z] Copying: 250/1024 [MB] (11 MBps) [2024-11-29T14:37:47.744Z] Copying: 267/1024 [MB] (16 MBps) [2024-11-29T14:37:48.687Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-29T14:37:49.630Z] Copying: 288/1024 [MB] (10 MBps) [2024-11-29T14:37:51.012Z] Copying: 305448/1048576 [kB] (10184 kBps) [2024-11-29T14:37:51.952Z] Copying: 308/1024 [MB] (10 MBps) [2024-11-29T14:37:52.886Z] Copying: 325972/1048576 [kB] (10176 kBps) [2024-11-29T14:37:53.820Z] Copying: 336008/1048576 [kB] (10036 kBps) [2024-11-29T14:37:54.756Z] Copying: 338/1024 [MB] (10 MBps) [2024-11-29T14:37:55.693Z] Copying: 348/1024 [MB] (10 MBps) [2024-11-29T14:37:56.629Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-29T14:37:58.009Z] Copying: 370/1024 [MB] (11 MBps) [2024-11-29T14:37:58.948Z] Copying: 389924/1048576 [kB] (10164 kBps) [2024-11-29T14:37:59.886Z] Copying: 399656/1048576 [kB] (9732 kBps) [2024-11-29T14:38:00.825Z] Copying: 409836/1048576 [kB] (10180 kBps) [2024-11-29T14:38:01.765Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-29T14:38:02.708Z] Copying: 421/1024 [MB] (11 MBps) [2024-11-29T14:38:03.651Z] Copying: 441812/1048576 [kB] (10128 kBps) [2024-11-29T14:38:05.036Z] Copying: 441/1024 [MB] (10 MBps) 
[2024-11-29T14:38:05.977Z] Copying: 462432/1048576 [kB] (10112 kBps) [2024-11-29T14:38:07.024Z] Copying: 461/1024 [MB] (10 MBps) [2024-11-29T14:38:07.996Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-29T14:38:08.941Z] Copying: 482/1024 [MB] (10 MBps) [2024-11-29T14:38:09.887Z] Copying: 503116/1048576 [kB] (9428 kBps) [2024-11-29T14:38:10.828Z] Copying: 512364/1048576 [kB] (9248 kBps) [2024-11-29T14:38:11.771Z] Copying: 522300/1048576 [kB] (9936 kBps) [2024-11-29T14:38:12.717Z] Copying: 532068/1048576 [kB] (9768 kBps) [2024-11-29T14:38:13.660Z] Copying: 542024/1048576 [kB] (9956 kBps) [2024-11-29T14:38:15.045Z] Copying: 539/1024 [MB] (10 MBps) [2024-11-29T14:38:15.987Z] Copying: 562964/1048576 [kB] (10072 kBps) [2024-11-29T14:38:16.931Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-29T14:38:17.947Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-29T14:38:18.891Z] Copying: 581/1024 [MB] (10 MBps) [2024-11-29T14:38:19.836Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-29T14:38:20.781Z] Copying: 615792/1048576 [kB] (10136 kBps) [2024-11-29T14:38:21.727Z] Copying: 611/1024 [MB] (10 MBps) [2024-11-29T14:38:22.672Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-29T14:38:24.059Z] Copying: 632/1024 [MB] (10 MBps) [2024-11-29T14:38:24.631Z] Copying: 642/1024 [MB] (10 MBps) [2024-11-29T14:38:26.018Z] Copying: 668428/1048576 [kB] (10000 kBps) [2024-11-29T14:38:26.956Z] Copying: 666/1024 [MB] (13 MBps) [2024-11-29T14:38:27.888Z] Copying: 688/1024 [MB] (22 MBps) [2024-11-29T14:38:28.821Z] Copying: 711/1024 [MB] (23 MBps) [2024-11-29T14:38:29.755Z] Copying: 729/1024 [MB] (17 MBps) [2024-11-29T14:38:30.689Z] Copying: 754/1024 [MB] (24 MBps) [2024-11-29T14:38:31.707Z] Copying: 771/1024 [MB] (17 MBps) [2024-11-29T14:38:32.640Z] Copying: 788/1024 [MB] (17 MBps) [2024-11-29T14:38:34.012Z] Copying: 804/1024 [MB] (15 MBps) [2024-11-29T14:38:34.946Z] Copying: 824/1024 [MB] (19 MBps) [2024-11-29T14:38:35.880Z] Copying: 844/1024 [MB] (20 MBps) [2024-11-29T14:38:36.815Z] Copying: 861/1024 [MB] (16 MBps) [2024-11-29T14:38:37.750Z] Copying: 877/1024 [MB] (16 MBps) [2024-11-29T14:38:38.684Z] Copying: 889/1024 [MB] (11 MBps) [2024-11-29T14:38:39.632Z] Copying: 910/1024 [MB] (21 MBps) [2024-11-29T14:38:41.007Z] Copying: 934/1024 [MB] (23 MBps) [2024-11-29T14:38:41.946Z] Copying: 957/1024 [MB] (23 MBps) [2024-11-29T14:38:42.889Z] Copying: 982/1024 [MB] (25 MBps) [2024-11-29T14:38:43.830Z] Copying: 992/1024 [MB] (10 MBps) [2024-11-29T14:38:44.767Z] Copying: 1004/1024 [MB] (11 MBps) [2024-11-29T14:38:44.767Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-29 14:38:44.744920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.973 [2024-11-29 14:38:44.744988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:02.973 [2024-11-29 14:38:44.745004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:02.973 [2024-11-29 14:38:44.745013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.973 [2024-11-29 14:38:44.745041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.973 [2024-11-29 14:38:44.745550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.973 [2024-11-29 14:38:44.745570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:02.973 [2024-11-29 14:38:44.745580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:31:02.973 [2024-11-29 14:38:44.745589] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:02.973 [2024-11-29 14:38:44.745835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.973 [2024-11-29 14:38:44.745847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:02.973 [2024-11-29 14:38:44.745857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:31:02.973 [2024-11-29 14:38:44.745867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.973 [2024-11-29 14:38:44.745897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.973 [2024-11-29 14:38:44.745907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:02.973 [2024-11-29 14:38:44.745920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:02.973 [2024-11-29 14:38:44.745930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.973 [2024-11-29 14:38:44.745988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.973 [2024-11-29 14:38:44.745999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:02.973 [2024-11-29 14:38:44.746009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:02.973 [2024-11-29 14:38:44.746019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.973 [2024-11-29 14:38:44.746035] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:02.973 [2024-11-29 14:38:44.746049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746194] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 
[2024-11-29 14:38:44.746435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:02.973 [2024-11-29 14:38:44.746468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:31:02.974 [2024-11-29 14:38:44.746664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:02.974 [2024-11-29 14:38:44.746976] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:02.974 [2024-11-29 14:38:44.746984] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f5a9f8c6-46e2-48a9-97a8-befed0908257 00:31:02.974 [2024-11-29 14:38:44.746996] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:02.974 [2024-11-29 14:38:44.747004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:02.974 [2024-11-29 14:38:44.747012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:02.974 [2024-11-29 14:38:44.747020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:02.974 [2024-11-29 14:38:44.747028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:02.974 [2024-11-29 14:38:44.747040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:02.974 [2024-11-29 14:38:44.747048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:02.974 [2024-11-29 14:38:44.747055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:02.974 [2024-11-29 14:38:44.747062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:02.974 [2024-11-29 14:38:44.747070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.974 [2024-11-29 14:38:44.747078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:02.974 [2024-11-29 14:38:44.747087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:31:02.974 [2024-11-29 14:38:44.747095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.748762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.974 [2024-11-29 14:38:44.748783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:02.974 
[2024-11-29 14:38:44.748796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:31:02.974 [2024-11-29 14:38:44.748806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.748893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.974 [2024-11-29 14:38:44.748913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:02.974 [2024-11-29 14:38:44.748924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:02.974 [2024-11-29 14:38:44.748935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.754599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.974 [2024-11-29 14:38:44.754641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.974 [2024-11-29 14:38:44.754658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.974 [2024-11-29 14:38:44.754668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.754735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.974 [2024-11-29 14:38:44.754745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.974 [2024-11-29 14:38:44.754760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.974 [2024-11-29 14:38:44.754770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.754809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.974 [2024-11-29 14:38:44.754820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.974 [2024-11-29 14:38:44.754830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.974 [2024-11-29 14:38:44.754839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.754884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.974 [2024-11-29 14:38:44.754895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.974 [2024-11-29 14:38:44.754905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.974 [2024-11-29 14:38:44.754921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.974 [2024-11-29 14:38:44.764254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.974 [2024-11-29 14:38:44.764292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.974 [2024-11-29 14:38:44.764302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.975 [2024-11-29 14:38:44.764309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.771985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:03.234 [2024-11-29 14:38:44.772032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772079] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:03.234 [2024-11-29 14:38:44.772086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:03.234 [2024-11-29 14:38:44.772156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:03.234 [2024-11-29 14:38:44.772226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:03.234 [2024-11-29 14:38:44.772275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:03.234 [2024-11-29 14:38:44.772335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.234 [2024-11-29 14:38:44.772387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:03.234 [2024-11-29 14:38:44.772395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.234 [2024-11-29 14:38:44.772402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.234 [2024-11-29 14:38:44.772533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.569 ms, result 0 00:31:03.234 00:31:03.234 00:31:03.234 14:38:44 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:05.765 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:05.765 14:38:47 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:05.765 [2024-11-29 14:38:47.162584] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:05.765 [2024-11-29 14:38:47.162710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94815 ] 00:31:05.765 [2024-11-29 14:38:47.311613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.765 [2024-11-29 14:38:47.344027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.765 [2024-11-29 14:38:47.429894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:05.765 [2024-11-29 14:38:47.429956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:06.025 [2024-11-29 14:38:47.586446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.586506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:06.025 [2024-11-29 14:38:47.586522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:06.025 [2024-11-29 14:38:47.586530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.586576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.586586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:06.025 [2024-11-29 14:38:47.586594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:06.025 [2024-11-29 14:38:47.586600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.586619] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:06.025 [2024-11-29 14:38:47.586858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:06.025 [2024-11-29 14:38:47.586875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.586883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:06.025 [2024-11-29 14:38:47.586893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:31:06.025 [2024-11-29 14:38:47.586902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.587601] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:06.025 [2024-11-29 14:38:47.587642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.587652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:06.025 [2024-11-29 14:38:47.587663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:06.025 [2024-11-29 14:38:47.587677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.587771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.587785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:06.025 [2024-11-29 14:38:47.587793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:06.025 [2024-11-29 14:38:47.587804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.588049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:06.025 [2024-11-29 14:38:47.588060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:06.025 [2024-11-29 14:38:47.588068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:31:06.025 [2024-11-29 14:38:47.588075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.588149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.588159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:06.025 [2024-11-29 14:38:47.588167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:06.025 [2024-11-29 14:38:47.588174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.588203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.588212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:06.025 [2024-11-29 14:38:47.588219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:06.025 [2024-11-29 14:38:47.588227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.588245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:06.025 [2024-11-29 14:38:47.589645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.589660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:06.025 [2024-11-29 14:38:47.589675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:31:06.025 [2024-11-29 14:38:47.589682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.589707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.589715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:06.025 [2024-11-29 14:38:47.589723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:06.025 [2024-11-29 14:38:47.589733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.025 [2024-11-29 14:38:47.589754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:06.025 [2024-11-29 14:38:47.589772] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:06.025 [2024-11-29 14:38:47.589807] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:06.025 [2024-11-29 14:38:47.589821] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:06.025 [2024-11-29 14:38:47.589926] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:06.025 [2024-11-29 14:38:47.589935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:06.025 [2024-11-29 14:38:47.589945] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:06.025 [2024-11-29 14:38:47.589955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:06.025 [2024-11-29 14:38:47.589964] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:06.025 [2024-11-29 14:38:47.589972] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:06.025 [2024-11-29 14:38:47.589984] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:06.025 [2024-11-29 14:38:47.589991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:06.025 [2024-11-29 14:38:47.589997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:06.025 [2024-11-29 14:38:47.590008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.025 [2024-11-29 14:38:47.590015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:06.025 [2024-11-29 14:38:47.590023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:31:06.026 [2024-11-29 14:38:47.590030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.590115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.590122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:06.026 [2024-11-29 14:38:47.590129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:06.026 [2024-11-29 14:38:47.590139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.590244] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:06.026 [2024-11-29 14:38:47.590253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:06.026 [2024-11-29 14:38:47.590261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:06.026 [2024-11-29 14:38:47.590289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:06.026 [2024-11-29 14:38:47.590312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.026 [2024-11-29 14:38:47.590328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:06.026 [2024-11-29 14:38:47.590337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:06.026 [2024-11-29 14:38:47.590344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.026 [2024-11-29 14:38:47.590352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:06.026 [2024-11-29 14:38:47.590360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:06.026 [2024-11-29 14:38:47.590367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:06.026 [2024-11-29 14:38:47.590383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590390] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:06.026 [2024-11-29 14:38:47.590407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:06.026 [2024-11-29 14:38:47.590430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:06.026 [2024-11-29 14:38:47.590451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:06.026 [2024-11-29 14:38:47.590473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:06.026 [2024-11-29 14:38:47.590507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.026 [2024-11-29 14:38:47.590522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:06.026 [2024-11-29 14:38:47.590529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:06.026 [2024-11-29 14:38:47.590543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.026 [2024-11-29 14:38:47.590550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:06.026 [2024-11-29 14:38:47.590558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:06.026 [2024-11-29 14:38:47.590565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:06.026 [2024-11-29 14:38:47.590579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:06.026 [2024-11-29 14:38:47.590587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590595] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:06.026 [2024-11-29 14:38:47.590604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:06.026 [2024-11-29 14:38:47.590612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.026 [2024-11-29 14:38:47.590632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:06.026 [2024-11-29 14:38:47.590639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:06.026 [2024-11-29 14:38:47.590647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:06.026 
[2024-11-29 14:38:47.590655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:06.026 [2024-11-29 14:38:47.590662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:06.026 [2024-11-29 14:38:47.590671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:06.026 [2024-11-29 14:38:47.590680] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:06.026 [2024-11-29 14:38:47.590693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:06.026 [2024-11-29 14:38:47.590714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:06.026 [2024-11-29 14:38:47.590722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:06.026 [2024-11-29 14:38:47.590730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:06.026 [2024-11-29 14:38:47.590738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:06.026 [2024-11-29 14:38:47.590746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:06.026 [2024-11-29 14:38:47.590753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:06.026 [2024-11-29 14:38:47.590762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:06.026 [2024-11-29 14:38:47.590769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:06.026 [2024-11-29 14:38:47.590777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:06.026 [2024-11-29 14:38:47.590819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:06.026 [2024-11-29 14:38:47.590828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.026 [2024-11-29 14:38:47.590844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:06.026 [2024-11-29 14:38:47.590852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:06.026 [2024-11-29 14:38:47.590860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:06.026 [2024-11-29 14:38:47.590869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.590877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:06.026 [2024-11-29 14:38:47.590885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:31:06.026 [2024-11-29 14:38:47.590893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.609040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.609366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:06.026 [2024-11-29 14:38:47.609429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.081 ms 00:31:06.026 [2024-11-29 14:38:47.609453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.609752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.609782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:06.026 [2024-11-29 14:38:47.609828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:31:06.026 [2024-11-29 14:38:47.609853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.619839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.619950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:06.026 [2024-11-29 14:38:47.619969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.830 ms 00:31:06.026 [2024-11-29 14:38:47.619976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.620003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.026 [2024-11-29 14:38:47.620011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:06.026 [2024-11-29 14:38:47.620020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:06.026 [2024-11-29 14:38:47.620026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.026 [2024-11-29 14:38:47.620107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.620116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:06.027 [2024-11-29 14:38:47.620124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:06.027 [2024-11-29 14:38:47.620137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.620245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.620252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:06.027 [2024-11-29 14:38:47.620260] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:06.027 [2024-11-29 14:38:47.620269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.624623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.624652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:06.027 [2024-11-29 14:38:47.624662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:31:06.027 [2024-11-29 14:38:47.624673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.624764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:06.027 [2024-11-29 14:38:47.624775] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:06.027 [2024-11-29 14:38:47.624784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.624796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:06.027 [2024-11-29 14:38:47.624804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:06.027 [2024-11-29 14:38:47.624811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.637067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.637099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:06.027 [2024-11-29 14:38:47.637109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.242 ms 00:31:06.027 [2024-11-29 14:38:47.637117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.637228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.637236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:06.027 [2024-11-29 14:38:47.637244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:06.027 [2024-11-29 14:38:47.637251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.637297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.637305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:06.027 [2024-11-29 14:38:47.637313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:06.027 [2024-11-29 14:38:47.637324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.637639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.637650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:06.027 [2024-11-29 14:38:47.637657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:31:06.027 [2024-11-29 14:38:47.637664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.637681] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:06.027 [2024-11-29 14:38:47.637690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.637700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:06.027 [2024-11-29 14:38:47.637708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:06.027 [2024-11-29 14:38:47.637717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.645534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:06.027 [2024-11-29 14:38:47.645649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.645658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:06.027 [2024-11-29 14:38:47.645671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.916 ms 00:31:06.027 [2024-11-29 14:38:47.645678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.647959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.647982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:06.027 [2024-11-29 14:38:47.647992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms 00:31:06.027 [2024-11-29 14:38:47.647999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.648060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.648069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:06.027 [2024-11-29 14:38:47.648077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:06.027 [2024-11-29 14:38:47.648083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.648116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.648128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:06.027 [2024-11-29 14:38:47.648135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:06.027 [2024-11-29 14:38:47.648142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.648168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:06.027 [2024-11-29 14:38:47.648177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.648187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:06.027 [2024-11-29 14:38:47.648195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:06.027 [2024-11-29 14:38:47.648202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.652114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.652147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:06.027 [2024-11-29 14:38:47.652162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.893 ms 00:31:06.027 [2024-11-29 14:38:47.652170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.652243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.027 [2024-11-29 14:38:47.652253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:06.027 [2024-11-29 14:38:47.652263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:31:06.027 [2024-11-29 14:38:47.652270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.027 [2024-11-29 14:38:47.653115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.279 ms, result 0 00:31:06.962  [2024-11-29T14:38:49.691Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-29T14:38:51.066Z] Copying: 34/1024 [MB] (10 MBps) [2024-11-29T14:38:51.999Z] Copying: 44/1024 [MB] (10 MBps) [2024-11-29T14:38:52.998Z] Copying: 55/1024 [MB] (10 MBps) [2024-11-29T14:38:53.938Z] Copying: 65/1024 [MB] (10 MBps) [2024-11-29T14:38:54.874Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-29T14:38:55.811Z] Copying: 92/1024 [MB] (15 MBps) [2024-11-29T14:38:56.751Z] Copying: 103/1024 [MB] (11 MBps) [2024-11-29T14:38:57.688Z] Copying: 128/1024 [MB] (24 MBps) [2024-11-29T14:38:59.060Z] Copying: 172/1024 [MB] (44 MBps) [2024-11-29T14:38:59.995Z] Copying: 217/1024 [MB] (45 MBps) [2024-11-29T14:39:00.928Z] Copying: 263/1024 [MB] (46 MBps) [2024-11-29T14:39:01.861Z] Copying: 307/1024 [MB] (43 MBps) [2024-11-29T14:39:02.797Z] Copying: 353/1024 [MB] (45 MBps) [2024-11-29T14:39:03.812Z] Copying: 398/1024 [MB] (45 MBps) [2024-11-29T14:39:04.746Z] Copying: 448/1024 [MB] (50 MBps) [2024-11-29T14:39:05.680Z] Copying: 498/1024 [MB] (49 MBps) [2024-11-29T14:39:07.055Z] Copying: 544/1024 [MB] (45 MBps) [2024-11-29T14:39:07.992Z] Copying: 589/1024 [MB] (45 MBps) [2024-11-29T14:39:08.952Z] Copying: 637/1024 [MB] (47 MBps) [2024-11-29T14:39:09.885Z] Copying: 685/1024 [MB] (48 MBps) [2024-11-29T14:39:10.818Z] Copying: 730/1024 [MB] (45 MBps) [2024-11-29T14:39:11.750Z] Copying: 775/1024 [MB] (45 MBps) [2024-11-29T14:39:12.682Z] Copying: 824/1024 [MB] (48 MBps) [2024-11-29T14:39:14.056Z] Copying: 875/1024 [MB] (51 MBps) [2024-11-29T14:39:14.696Z] Copying: 918/1024 [MB] (42 MBps) [2024-11-29T14:39:16.082Z] Copying: 963/1024 [MB] (44 MBps) [2024-11-29T14:39:17.028Z] Copying: 1006/1024 [MB] (43 MBps) [2024-11-29T14:39:17.288Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-29T14:39:17.288Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-29 14:39:17.088653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.494 [2024-11-29 14:39:17.088714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:35.494 [2024-11-29 14:39:17.088728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:35.494 [2024-11-29 14:39:17.088737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.494 [2024-11-29 14:39:17.090846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:35.494 [2024-11-29 14:39:17.093615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.494 [2024-11-29 14:39:17.093646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:35.494 [2024-11-29 14:39:17.093656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:31:35.494 [2024-11-29 14:39:17.093664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.494 [2024-11-29 14:39:17.102262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.494 [2024-11-29 14:39:17.102294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:35.494 [2024-11-29 14:39:17.102309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.735 ms 00:31:35.494 [2024-11-29 14:39:17.102317] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.494 [2024-11-29 14:39:17.102342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.494 [2024-11-29 14:39:17.102351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:35.494 [2024-11-29 14:39:17.102359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:35.494 [2024-11-29 14:39:17.102371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.494 [2024-11-29 14:39:17.102415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.494 [2024-11-29 14:39:17.102423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:35.494 [2024-11-29 14:39:17.102430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:35.494 [2024-11-29 14:39:17.102440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.494 [2024-11-29 14:39:17.102452] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:35.494 [2024-11-29 14:39:17.102463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:31:35.494 [2024-11-29 14:39:17.102473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:35.494 [2024-11-29 14:39:17.102481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:35.494 [2024-11-29 14:39:17.102488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 
14:39:17.102603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:31:35.495 [2024-11-29 14:39:17.102789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.102993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:35.495 [2024-11-29 14:39:17.103163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:35.496 [2024-11-29 14:39:17.103222] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:35.496 [2024-11-29 14:39:17.103229] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f5a9f8c6-46e2-48a9-97a8-befed0908257 00:31:35.496 [2024-11-29 14:39:17.103240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:31:35.496 [2024-11-29 14:39:17.103247] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129824 00:31:35.496 [2024-11-29 14:39:17.103254] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:31:35.496 [2024-11-29 14:39:17.103261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:35.496 [2024-11-29 14:39:17.103272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:35.496 [2024-11-29 14:39:17.103279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:35.496 [2024-11-29 14:39:17.103288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:35.496 [2024-11-29 14:39:17.103294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:35.496 [2024-11-29 14:39:17.103300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:35.496 [2024-11-29 14:39:17.103307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.496 [2024-11-29 14:39:17.103314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:35.496 [2024-11-29 14:39:17.103321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:31:35.496 [2024-11-29 14:39:17.103331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.104738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.496 [2024-11-29 14:39:17.104759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:35.496 [2024-11-29 14:39:17.104768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:31:35.496 [2024-11-29 14:39:17.104775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.104853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.496 [2024-11-29 14:39:17.104861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:35.496 [2024-11-29 14:39:17.104869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:35.496 [2024-11-29 14:39:17.104875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.109132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.109154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:35.496 [2024-11-29 14:39:17.109166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.109174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.109224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.109232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:35.496 [2024-11-29 14:39:17.109239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.109247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.109274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.109282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:35.496 [2024-11-29 14:39:17.109289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.109299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.109313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.109321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:35.496 [2024-11-29 14:39:17.109328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.109335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.117906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.117946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:35.496 [2024-11-29 14:39:17.117956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.117969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:35.496 [2024-11-29 14:39:17.125619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:35.496 [2024-11-29 14:39:17.125685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:31:35.496 [2024-11-29 14:39:17.125728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:35.496 [2024-11-29 14:39:17.125736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:35.496 [2024-11-29 14:39:17.125811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:35.496 [2024-11-29 14:39:17.125863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:35.496 [2024-11-29 14:39:17.125919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.125965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:35.496 [2024-11-29 14:39:17.125975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:35.496 [2024-11-29 14:39:17.125983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:35.496 [2024-11-29 14:39:17.125990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.496 [2024-11-29 14:39:17.126101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.669 ms, result 0 00:31:37.412 00:31:37.412 00:31:37.412 14:39:19 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:37.412 [2024-11-29 14:39:19.074161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
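The spdk_dd invocation above reads --count=262144 I/O units from the ftl0 bdev after skipping the first --skip=131072, and the progress lines that follow report 1024/1024 [MB] in total; together with the 'Dump statistics' section of the preceding shutdown (total writes 129824, user writes 129792, WAF 1.0002), the figures can be cross-checked with a few lines of arithmetic. A minimal sketch, assuming a 4 KiB I/O unit size inferred from the 1024 MB total (the log does not state the unit size explicitly):

    # Editorial cross-check of figures visible in the log; 4 KiB unit size is an assumption.
    UNIT_SIZE = 4096                     # assumed bytes per I/O unit

    skip_units = 131072                  # --skip from the spdk_dd command line
    count_units = 262144                 # --count from the spdk_dd command line
    print(skip_units * UNIT_SIZE // 2**20, "MiB skipped")   # -> 512 MiB
    print(count_units * UNIT_SIZE // 2**20, "MiB copied")   # -> 1024 MiB, matching the progress totals

    # Write-amplification factor from the previous run's 'Dump statistics' output:
    # WAF = total writes / user writes.
    total_writes = 129824
    user_writes = 129792
    print(round(total_writes / user_writes, 4))             # -> 1.0002, as logged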
00:31:37.412 [2024-11-29 14:39:19.074477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95127 ] 00:31:37.670 [2024-11-29 14:39:19.223215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.670 [2024-11-29 14:39:19.255988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.670 [2024-11-29 14:39:19.342175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:37.670 [2024-11-29 14:39:19.342241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:37.929 [2024-11-29 14:39:19.494704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.929 [2024-11-29 14:39:19.494765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:37.929 [2024-11-29 14:39:19.494780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:37.929 [2024-11-29 14:39:19.494788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.929 [2024-11-29 14:39:19.494835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.929 [2024-11-29 14:39:19.494851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:37.929 [2024-11-29 14:39:19.494859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:37.929 [2024-11-29 14:39:19.494867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.929 [2024-11-29 14:39:19.494886] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:37.929 [2024-11-29 14:39:19.495130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:37.929 [2024-11-29 14:39:19.495145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.929 [2024-11-29 14:39:19.495160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:37.929 [2024-11-29 14:39:19.495168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:31:37.929 [2024-11-29 14:39:19.495180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.929 [2024-11-29 14:39:19.495415] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:37.930 [2024-11-29 14:39:19.495436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.495443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:37.930 [2024-11-29 14:39:19.495452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:37.930 [2024-11-29 14:39:19.495459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.495557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.495570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:37.930 [2024-11-29 14:39:19.495579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:37.930 [2024-11-29 14:39:19.495586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.495816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:37.930 [2024-11-29 14:39:19.495826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:37.930 [2024-11-29 14:39:19.495834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:31:37.930 [2024-11-29 14:39:19.495843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.495914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.495929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:37.930 [2024-11-29 14:39:19.495937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:37.930 [2024-11-29 14:39:19.495944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.495965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.495974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:37.930 [2024-11-29 14:39:19.495981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:37.930 [2024-11-29 14:39:19.495989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.496007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:37.930 [2024-11-29 14:39:19.497419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.497434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:37.930 [2024-11-29 14:39:19.497443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:31:37.930 [2024-11-29 14:39:19.497451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.497481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.497506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:37.930 [2024-11-29 14:39:19.497514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:37.930 [2024-11-29 14:39:19.497525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.497543] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:37.930 [2024-11-29 14:39:19.497561] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:37.930 [2024-11-29 14:39:19.497596] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:37.930 [2024-11-29 14:39:19.497614] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:37.930 [2024-11-29 14:39:19.497715] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:37.930 [2024-11-29 14:39:19.497729] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:37.930 [2024-11-29 14:39:19.497739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:37.930 [2024-11-29 14:39:19.497748] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:37.930 [2024-11-29 14:39:19.497757] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:37.930 [2024-11-29 14:39:19.497765] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:37.930 [2024-11-29 14:39:19.497774] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:37.930 [2024-11-29 14:39:19.497781] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:37.930 [2024-11-29 14:39:19.497788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:37.930 [2024-11-29 14:39:19.497797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.497804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:37.930 [2024-11-29 14:39:19.497812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:31:37.930 [2024-11-29 14:39:19.497822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.497903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.930 [2024-11-29 14:39:19.497910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:37.930 [2024-11-29 14:39:19.497918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:37.930 [2024-11-29 14:39:19.497930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.930 [2024-11-29 14:39:19.498024] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:37.930 [2024-11-29 14:39:19.498034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:37.930 [2024-11-29 14:39:19.498045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:37.930 [2024-11-29 14:39:19.498077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:37.930 [2024-11-29 14:39:19.498101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:37.930 [2024-11-29 14:39:19.498114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:37.930 [2024-11-29 14:39:19.498121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:37.930 [2024-11-29 14:39:19.498128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:37.930 [2024-11-29 14:39:19.498135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:37.930 [2024-11-29 14:39:19.498142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:37.930 [2024-11-29 14:39:19.498148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:37.930 [2024-11-29 14:39:19.498162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498169] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:37.930 [2024-11-29 14:39:19.498182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:37.930 [2024-11-29 14:39:19.498204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:37.930 [2024-11-29 14:39:19.498224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:37.930 [2024-11-29 14:39:19.498244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:37.930 [2024-11-29 14:39:19.498263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:37.930 [2024-11-29 14:39:19.498275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:37.930 [2024-11-29 14:39:19.498281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:37.930 [2024-11-29 14:39:19.498288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:37.930 [2024-11-29 14:39:19.498294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:37.930 [2024-11-29 14:39:19.498301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:37.930 [2024-11-29 14:39:19.498310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:37.930 [2024-11-29 14:39:19.498323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:37.930 [2024-11-29 14:39:19.498328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:37.930 [2024-11-29 14:39:19.498342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:37.930 [2024-11-29 14:39:19.498350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.930 [2024-11-29 14:39:19.498364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:37.930 [2024-11-29 14:39:19.498372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:37.930 [2024-11-29 14:39:19.498378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:37.930 
[2024-11-29 14:39:19.498386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:37.930 [2024-11-29 14:39:19.498392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:37.930 [2024-11-29 14:39:19.498399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:37.930 [2024-11-29 14:39:19.498407] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:37.931 [2024-11-29 14:39:19.498421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:37.931 [2024-11-29 14:39:19.498438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:37.931 [2024-11-29 14:39:19.498445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:37.931 [2024-11-29 14:39:19.498452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:37.931 [2024-11-29 14:39:19.498459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:37.931 [2024-11-29 14:39:19.498466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:37.931 [2024-11-29 14:39:19.498473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:37.931 [2024-11-29 14:39:19.498480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:37.931 [2024-11-29 14:39:19.498487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:37.931 [2024-11-29 14:39:19.498508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:37.931 [2024-11-29 14:39:19.498551] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:37.931 [2024-11-29 14:39:19.498559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:37.931 [2024-11-29 14:39:19.498577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:37.931 [2024-11-29 14:39:19.498584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:37.931 [2024-11-29 14:39:19.498591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:37.931 [2024-11-29 14:39:19.498598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.498605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:37.931 [2024-11-29 14:39:19.498613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:31:37.931 [2024-11-29 14:39:19.498620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.512451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.512507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:37.931 [2024-11-29 14:39:19.512527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.773 ms 00:31:37.931 [2024-11-29 14:39:19.512535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.512620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.512633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:37.931 [2024-11-29 14:39:19.512642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:37.931 [2024-11-29 14:39:19.512653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.521836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.522020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:37.931 [2024-11-29 14:39:19.522046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.128 ms 00:31:37.931 [2024-11-29 14:39:19.522057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.522105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.522117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:37.931 [2024-11-29 14:39:19.522129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:37.931 [2024-11-29 14:39:19.522139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.522229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.522242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:37.931 [2024-11-29 14:39:19.522253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:37.931 [2024-11-29 14:39:19.522267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.522420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.522432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:37.931 [2024-11-29 14:39:19.522442] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:31:37.931 [2024-11-29 14:39:19.522452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.527696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.527732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:37.931 [2024-11-29 14:39:19.527741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:31:37.931 [2024-11-29 14:39:19.527755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.527867] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:37.931 [2024-11-29 14:39:19.527880] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:37.931 [2024-11-29 14:39:19.527889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.527901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:37.931 [2024-11-29 14:39:19.527913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:37.931 [2024-11-29 14:39:19.527921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.540177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.540209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:37.931 [2024-11-29 14:39:19.540219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.242 ms 00:31:37.931 [2024-11-29 14:39:19.540226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.540331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.540339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:37.931 [2024-11-29 14:39:19.540347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:31:37.931 [2024-11-29 14:39:19.540356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.540398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.540407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:37.931 [2024-11-29 14:39:19.540415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:37.931 [2024-11-29 14:39:19.540425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.540754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.540773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:37.931 [2024-11-29 14:39:19.540781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:31:37.931 [2024-11-29 14:39:19.540788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.540806] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:37.931 [2024-11-29 14:39:19.540818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.540825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:37.931 [2024-11-29 14:39:19.540834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:37.931 [2024-11-29 14:39:19.540843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.548649] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:37.931 [2024-11-29 14:39:19.548764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.548779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:37.931 [2024-11-29 14:39:19.548788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.902 ms 00:31:37.931 [2024-11-29 14:39:19.548799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.551279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.551305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:37.931 [2024-11-29 14:39:19.551315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:31:37.931 [2024-11-29 14:39:19.551322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.551370] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:37.931 [2024-11-29 14:39:19.551953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.552003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:37.931 [2024-11-29 14:39:19.552013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:31:37.931 [2024-11-29 14:39:19.552021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.552060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.552069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:37.931 [2024-11-29 14:39:19.552080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:37.931 [2024-11-29 14:39:19.552087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.931 [2024-11-29 14:39:19.552117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:37.931 [2024-11-29 14:39:19.552126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.931 [2024-11-29 14:39:19.552133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:37.931 [2024-11-29 14:39:19.552140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:37.931 [2024-11-29 14:39:19.552147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.932 [2024-11-29 14:39:19.555577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.932 [2024-11-29 14:39:19.555612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:37.932 [2024-11-29 14:39:19.555622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:31:37.932 [2024-11-29 14:39:19.555629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.932 [2024-11-29 14:39:19.555695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.932 [2024-11-29 14:39:19.555704] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:37.932 [2024-11-29 14:39:19.555712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:37.932 [2024-11-29 14:39:19.555719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.932 [2024-11-29 14:39:19.556590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 61.487 ms, result 0 00:31:39.305  [2024-11-29T14:39:22.063Z] Copying: 49/1024 [MB] (49 MBps) [2024-11-29T14:39:22.996Z] Copying: 97/1024 [MB] (47 MBps) [2024-11-29T14:39:23.928Z] Copying: 146/1024 [MB] (48 MBps) [2024-11-29T14:39:24.859Z] Copying: 197/1024 [MB] (50 MBps) [2024-11-29T14:39:25.792Z] Copying: 237/1024 [MB] (39 MBps) [2024-11-29T14:39:27.167Z] Copying: 271/1024 [MB] (33 MBps) [2024-11-29T14:39:27.734Z] Copying: 308/1024 [MB] (37 MBps) [2024-11-29T14:39:29.109Z] Copying: 344/1024 [MB] (35 MBps) [2024-11-29T14:39:30.043Z] Copying: 371/1024 [MB] (26 MBps) [2024-11-29T14:39:30.977Z] Copying: 394/1024 [MB] (23 MBps) [2024-11-29T14:39:31.912Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-29T14:39:32.848Z] Copying: 446/1024 [MB] (26 MBps) [2024-11-29T14:39:33.783Z] Copying: 470/1024 [MB] (23 MBps) [2024-11-29T14:39:35.162Z] Copying: 500/1024 [MB] (30 MBps) [2024-11-29T14:39:36.100Z] Copying: 524/1024 [MB] (23 MBps) [2024-11-29T14:39:37.042Z] Copying: 539/1024 [MB] (15 MBps) [2024-11-29T14:39:37.983Z] Copying: 551/1024 [MB] (11 MBps) [2024-11-29T14:39:38.919Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-29T14:39:39.924Z] Copying: 584/1024 [MB] (21 MBps) [2024-11-29T14:39:40.867Z] Copying: 626/1024 [MB] (42 MBps) [2024-11-29T14:39:41.807Z] Copying: 656/1024 [MB] (30 MBps) [2024-11-29T14:39:42.750Z] Copying: 691/1024 [MB] (35 MBps) [2024-11-29T14:39:44.137Z] Copying: 738/1024 [MB] (46 MBps) [2024-11-29T14:39:45.078Z] Copying: 753/1024 [MB] (15 MBps) [2024-11-29T14:39:46.011Z] Copying: 767/1024 [MB] (13 MBps) [2024-11-29T14:39:46.972Z] Copying: 786/1024 [MB] (18 MBps) [2024-11-29T14:39:47.907Z] Copying: 806/1024 [MB] (20 MBps) [2024-11-29T14:39:48.847Z] Copying: 825/1024 [MB] (19 MBps) [2024-11-29T14:39:49.790Z] Copying: 851/1024 [MB] (25 MBps) [2024-11-29T14:39:51.171Z] Copying: 873/1024 [MB] (22 MBps) [2024-11-29T14:39:51.738Z] Copying: 913/1024 [MB] (40 MBps) [2024-11-29T14:39:53.113Z] Copying: 934/1024 [MB] (21 MBps) [2024-11-29T14:39:54.048Z] Copying: 954/1024 [MB] (19 MBps) [2024-11-29T14:39:54.615Z] Copying: 995/1024 [MB] (41 MBps) [2024-11-29T14:39:54.875Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-29 14:39:54.785240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.081 [2024-11-29 14:39:54.785305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:13.081 [2024-11-29 14:39:54.785319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:13.081 [2024-11-29 14:39:54.785327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.081 [2024-11-29 14:39:54.785352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:13.081 [2024-11-29 14:39:54.785834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.081 [2024-11-29 14:39:54.785853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:13.081 [2024-11-29 14:39:54.785862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:32:13.081 [2024-11-29 
14:39:54.785869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.081 [2024-11-29 14:39:54.786084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.081 [2024-11-29 14:39:54.786102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:13.081 [2024-11-29 14:39:54.786111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:32:13.081 [2024-11-29 14:39:54.786120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.081 [2024-11-29 14:39:54.786150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.081 [2024-11-29 14:39:54.786158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:13.081 [2024-11-29 14:39:54.786167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:13.081 [2024-11-29 14:39:54.786176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.081 [2024-11-29 14:39:54.786229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.081 [2024-11-29 14:39:54.786238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:13.081 [2024-11-29 14:39:54.786248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:13.081 [2024-11-29 14:39:54.786256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.081 [2024-11-29 14:39:54.786273] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:13.081 [2024-11-29 14:39:54.786286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:13.081 [2024-11-29 14:39:54.786297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:32:13.081 [2024-11-29 14:39:54.786408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:13.081 [2024-11-29 14:39:54.786570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786987] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.786995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:13.082 [2024-11-29 14:39:54.787089] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:13.082 [2024-11-29 14:39:54.787100] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f5a9f8c6-46e2-48a9-97a8-befed0908257 00:32:13.082 [2024-11-29 14:39:54.787107] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:13.082 [2024-11-29 14:39:54.787118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1312 00:32:13.082 [2024-11-29 14:39:54.787126] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1280 00:32:13.082 [2024-11-29 14:39:54.787134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0250 00:32:13.082 [2024-11-29 14:39:54.787140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:13.082 [2024-11-29 14:39:54.787156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:13.082 [2024-11-29 14:39:54.787174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:13.082 [2024-11-29 14:39:54.787180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:13.082 [2024-11-29 14:39:54.787187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:13.082 [2024-11-29 14:39:54.787194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.082 [2024-11-29 14:39:54.787201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:13.082 [2024-11-29 14:39:54.787208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:32:13.082 [2024-11-29 14:39:54.787215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.082 [2024-11-29 14:39:54.788709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.082 [2024-11-29 14:39:54.788734] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:13.082 [2024-11-29 14:39:54.788743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:32:13.082 [2024-11-29 14:39:54.788759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.082 [2024-11-29 14:39:54.788838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.082 [2024-11-29 14:39:54.788846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:13.082 [2024-11-29 14:39:54.788854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:13.082 [2024-11-29 14:39:54.788860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.082 [2024-11-29 14:39:54.794633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.082 [2024-11-29 14:39:54.794658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.082 [2024-11-29 14:39:54.794672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.082 [2024-11-29 14:39:54.794680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.082 [2024-11-29 14:39:54.794733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.794743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.083 [2024-11-29 14:39:54.794750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.794757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.794788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.794797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.083 [2024-11-29 14:39:54.794805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.794814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.794828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.794836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.083 [2024-11-29 14:39:54.794847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.794854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.804949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.805111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.083 [2024-11-29 14:39:54.805133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.805141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.813775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.813816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.083 [2024-11-29 14:39:54.813827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.813835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.813883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.813892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.083 [2024-11-29 14:39:54.813901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.813916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.813942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.813951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.083 [2024-11-29 14:39:54.813960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.813969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.814016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.814025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.083 [2024-11-29 14:39:54.814033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.814042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.814067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.814081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:13.083 [2024-11-29 14:39:54.814093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.814101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.814136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.814145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.083 [2024-11-29 14:39:54.814153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.814162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.814202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.083 [2024-11-29 14:39:54.814212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.083 [2024-11-29 14:39:54.814221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.083 [2024-11-29 14:39:54.814229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.083 [2024-11-29 14:39:54.814341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 29.075 ms, result 0 00:32:13.341 00:32:13.341 00:32:13.341 14:39:54 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:15.874 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93163 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93163 ']' 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93163 00:32:15.874 Process with pid 93163 is not found 00:32:15.874 Remove shared memory files 00:32:15.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93163) - No such process 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93163 is not found' 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_band_md /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_l2p_l1 /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_l2p_l2 /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_l2p_l2_ctx /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_nvc_md /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_p2l_pool /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_sb /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_sb_shm /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_trim_bitmap /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_trim_log /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_trim_md /dev/hugepages/ftl_f5a9f8c6-46e2-48a9-97a8-befed0908257_vmap 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:15.874 ************************************ 00:32:15.874 END TEST ftl_restore_fast 00:32:15.874 ************************************ 00:32:15.874 00:32:15.874 real 3m51.534s 00:32:15.874 user 3m40.152s 00:32:15.874 sys 0m11.603s 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:15.874 14:39:57 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@14 -- # killprocess 84166 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@950 -- # '[' -z 84166 ']' 00:32:15.874 Process with pid 84166 is not found 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@954 -- # kill -0 84166 00:32:15.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84166) - No such process 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84166 is not found' 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95532 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95532 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@831 -- # '[' -z 95532 ']' 00:32:15.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.874 14:39:57 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:15.874 14:39:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:15.874 [2024-11-29 14:39:57.401254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:15.874 [2024-11-29 14:39:57.401388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95532 ] 00:32:15.874 [2024-11-29 14:39:57.551137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.874 [2024-11-29 14:39:57.585417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.808 14:39:58 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.808 14:39:58 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:16.808 14:39:58 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:16.808 nvme0n1 00:32:16.808 14:39:58 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:16.808 14:39:58 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:16.808 14:39:58 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:17.067 14:39:58 ftl -- ftl/common.sh@28 -- # stores=64bea538-3f5e-4d6f-8b62-a20df8f3af72 00:32:17.067 14:39:58 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:17.067 14:39:58 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64bea538-3f5e-4d6f-8b62-a20df8f3af72 00:32:17.326 14:39:58 ftl -- ftl/ftl.sh@23 -- # killprocess 95532 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@950 -- # '[' -z 95532 ']' 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@954 -- # kill -0 95532 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@955 -- # uname 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95532 00:32:17.326 killing process with pid 95532 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95532' 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@969 -- # kill 95532 00:32:17.326 14:39:59 ftl -- common/autotest_common.sh@974 -- # wait 95532 00:32:17.584 14:39:59 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:17.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:17.843 Waiting for block devices as requested 00:32:17.843 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.102 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.102 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.102 
0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:23.381 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:23.381 Remove shared memory files 00:32:23.381 14:40:04 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:23.381 14:40:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:23.381 14:40:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:23.381 14:40:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:23.381 14:40:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:23.381 14:40:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:23.381 14:40:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:23.381 ************************************ 00:32:23.381 END TEST ftl 00:32:23.381 ************************************ 00:32:23.381 00:32:23.381 real 17m3.778s 00:32:23.381 user 18m45.687s 00:32:23.381 sys 1m18.896s 00:32:23.381 14:40:04 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:23.381 14:40:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:23.381 14:40:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:23.381 14:40:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:23.381 14:40:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:23.381 14:40:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:23.381 14:40:04 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:23.381 14:40:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:23.381 14:40:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:23.381 14:40:04 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:23.381 14:40:04 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:23.381 14:40:04 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:23.381 14:40:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:23.381 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:32:23.381 14:40:04 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:23.381 14:40:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:23.381 14:40:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:23.381 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:32:24.391 INFO: APP EXITING 00:32:24.391 INFO: killing all VMs 00:32:24.391 INFO: killing vhost app 00:32:24.391 INFO: EXIT DONE 00:32:24.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:25.228 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:25.228 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:25.228 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:25.228 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:25.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:26.065 Cleaning 00:32:26.065 Removing: /var/run/dpdk/spdk0/config 00:32:26.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:26.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:26.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:26.065 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:26.065 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:26.065 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:26.065 Removing: /var/run/dpdk/spdk0 00:32:26.065 Removing: /var/run/dpdk/spdk_pid69647 00:32:26.065 Removing: /var/run/dpdk/spdk_pid69810 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70006 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70088 00:32:26.065 Removing: 
/var/run/dpdk/spdk_pid70117 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70228 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70246 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70423 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70497 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70582 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70676 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70757 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70795 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70827 00:32:26.065 Removing: /var/run/dpdk/spdk_pid70898 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71004 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71423 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71471 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71512 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71528 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71586 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71602 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71660 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71676 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71728 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71736 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71778 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71796 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71923 00:32:26.065 Removing: /var/run/dpdk/spdk_pid71965 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72043 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72204 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72277 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72308 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72731 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72818 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72918 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72959 00:32:26.065 Removing: /var/run/dpdk/spdk_pid72981 00:32:26.065 Removing: /var/run/dpdk/spdk_pid73054 00:32:26.065 Removing: /var/run/dpdk/spdk_pid73670 00:32:26.065 Removing: /var/run/dpdk/spdk_pid73701 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74151 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74239 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74352 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74394 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74414 00:32:26.065 Removing: /var/run/dpdk/spdk_pid74434 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76257 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76372 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76381 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76399 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76438 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76442 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76454 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76494 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76498 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76510 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76555 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76559 00:32:26.065 Removing: /var/run/dpdk/spdk_pid76571 00:32:26.065 Removing: /var/run/dpdk/spdk_pid77945 00:32:26.065 Removing: /var/run/dpdk/spdk_pid78032 00:32:26.065 Removing: /var/run/dpdk/spdk_pid79431 00:32:26.065 Removing: /var/run/dpdk/spdk_pid80794 00:32:26.065 Removing: /var/run/dpdk/spdk_pid80854 00:32:26.065 Removing: /var/run/dpdk/spdk_pid80908 00:32:26.065 Removing: /var/run/dpdk/spdk_pid80963 00:32:26.065 Removing: /var/run/dpdk/spdk_pid81042 00:32:26.065 Removing: /var/run/dpdk/spdk_pid81106 00:32:26.065 Removing: /var/run/dpdk/spdk_pid81245 00:32:26.065 Removing: /var/run/dpdk/spdk_pid81595 00:32:26.065 Removing: /var/run/dpdk/spdk_pid81619 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82051 
00:32:26.065 Removing: /var/run/dpdk/spdk_pid82226 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82316 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82414 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82455 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82486 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82773 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82811 00:32:26.065 Removing: /var/run/dpdk/spdk_pid82861 00:32:26.065 Removing: /var/run/dpdk/spdk_pid83223 00:32:26.065 Removing: /var/run/dpdk/spdk_pid83367 00:32:26.065 Removing: /var/run/dpdk/spdk_pid84166 00:32:26.065 Removing: /var/run/dpdk/spdk_pid84281 00:32:26.065 Removing: /var/run/dpdk/spdk_pid84442 00:32:26.065 Removing: /var/run/dpdk/spdk_pid84534 00:32:26.065 Removing: /var/run/dpdk/spdk_pid84825 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85078 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85424 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85584 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85703 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85750 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85915 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85933 00:32:26.065 Removing: /var/run/dpdk/spdk_pid85972 00:32:26.065 Removing: /var/run/dpdk/spdk_pid86241 00:32:26.065 Removing: /var/run/dpdk/spdk_pid86471 00:32:26.065 Removing: /var/run/dpdk/spdk_pid87016 00:32:26.065 Removing: /var/run/dpdk/spdk_pid87931 00:32:26.065 Removing: /var/run/dpdk/spdk_pid88583 00:32:26.065 Removing: /var/run/dpdk/spdk_pid89359 00:32:26.065 Removing: /var/run/dpdk/spdk_pid89507 00:32:26.065 Removing: /var/run/dpdk/spdk_pid89585 00:32:26.065 Removing: /var/run/dpdk/spdk_pid89941 00:32:26.065 Removing: /var/run/dpdk/spdk_pid89994 00:32:26.065 Removing: /var/run/dpdk/spdk_pid90950 00:32:26.065 Removing: /var/run/dpdk/spdk_pid91490 00:32:26.065 Removing: /var/run/dpdk/spdk_pid92192 00:32:26.065 Removing: /var/run/dpdk/spdk_pid92325 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92360 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92414 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92465 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92518 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92698 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92779 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92835 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92924 00:32:26.327 Removing: /var/run/dpdk/spdk_pid92963 00:32:26.327 Removing: /var/run/dpdk/spdk_pid93031 00:32:26.327 Removing: /var/run/dpdk/spdk_pid93163 00:32:26.327 Removing: /var/run/dpdk/spdk_pid93376 00:32:26.327 Removing: /var/run/dpdk/spdk_pid94002 00:32:26.327 Removing: /var/run/dpdk/spdk_pid94815 00:32:26.327 Removing: /var/run/dpdk/spdk_pid95127 00:32:26.327 Removing: /var/run/dpdk/spdk_pid95532 00:32:26.327 Clean 00:32:26.327 14:40:07 -- common/autotest_common.sh@1451 -- # return 0 00:32:26.327 14:40:07 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:26.327 14:40:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:26.327 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:32:26.327 14:40:07 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:26.327 14:40:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:26.327 14:40:07 -- common/autotest_common.sh@10 -- # set +x 00:32:26.327 14:40:08 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:26.327 14:40:08 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:26.327 14:40:08 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 
00:32:26.327 14:40:08 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:26.327 14:40:08 -- spdk/autotest.sh@394 -- # hostname 00:32:26.327 14:40:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:26.589 geninfo: WARNING: invalid characters removed from testname! 00:32:53.153 14:40:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:54.538 14:40:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:55.957 14:40:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:58.495 14:40:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:00.401 14:40:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:02.303 14:40:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.685 14:40:45 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:03.685 14:40:45 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:33:03.685 14:40:45 -- common/autotest_common.sh@1681 -- $ lcov --version 00:33:03.685 14:40:45 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:33:03.946 14:40:45 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:33:03.946 14:40:45 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:03.946 14:40:45 -- scripts/common.sh@333 -- $ 
local ver1 ver1_l 00:33:03.946 14:40:45 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:03.946 14:40:45 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:03.946 14:40:45 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:03.946 14:40:45 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:03.946 14:40:45 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:03.946 14:40:45 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:03.946 14:40:45 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:03.946 14:40:45 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:03.946 14:40:45 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:33:03.946 14:40:45 -- scripts/common.sh@344 -- $ case "$op" in 00:33:03.946 14:40:45 -- scripts/common.sh@345 -- $ : 1 00:33:03.946 14:40:45 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:03.946 14:40:45 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:03.946 14:40:45 -- scripts/common.sh@365 -- $ decimal 1 00:33:03.946 14:40:45 -- scripts/common.sh@353 -- $ local d=1 00:33:03.946 14:40:45 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:03.946 14:40:45 -- scripts/common.sh@355 -- $ echo 1 00:33:03.946 14:40:45 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:03.946 14:40:45 -- scripts/common.sh@366 -- $ decimal 2 00:33:03.946 14:40:45 -- scripts/common.sh@353 -- $ local d=2 00:33:03.946 14:40:45 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:03.946 14:40:45 -- scripts/common.sh@355 -- $ echo 2 00:33:03.946 14:40:45 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:03.946 14:40:45 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:03.946 14:40:45 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:03.946 14:40:45 -- scripts/common.sh@368 -- $ return 0 00:33:03.946 14:40:45 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:03.946 14:40:45 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:33:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.946 --rc genhtml_branch_coverage=1 00:33:03.946 --rc genhtml_function_coverage=1 00:33:03.946 --rc genhtml_legend=1 00:33:03.946 --rc geninfo_all_blocks=1 00:33:03.946 --rc geninfo_unexecuted_blocks=1 00:33:03.946 00:33:03.946 ' 00:33:03.946 14:40:45 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:33:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.946 --rc genhtml_branch_coverage=1 00:33:03.946 --rc genhtml_function_coverage=1 00:33:03.946 --rc genhtml_legend=1 00:33:03.946 --rc geninfo_all_blocks=1 00:33:03.946 --rc geninfo_unexecuted_blocks=1 00:33:03.946 00:33:03.946 ' 00:33:03.946 14:40:45 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:33:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.946 --rc genhtml_branch_coverage=1 00:33:03.946 --rc genhtml_function_coverage=1 00:33:03.946 --rc genhtml_legend=1 00:33:03.946 --rc geninfo_all_blocks=1 00:33:03.946 --rc geninfo_unexecuted_blocks=1 00:33:03.946 00:33:03.946 ' 00:33:03.946 14:40:45 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:33:03.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:03.946 --rc genhtml_branch_coverage=1 00:33:03.946 --rc genhtml_function_coverage=1 00:33:03.946 --rc genhtml_legend=1 00:33:03.946 --rc geninfo_all_blocks=1 00:33:03.946 --rc geninfo_unexecuted_blocks=1 00:33:03.946 00:33:03.946 ' 00:33:03.946 14:40:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.946 14:40:45 -- 
scripts/common.sh@15 -- $ shopt -s extglob
00:33:03.946 14:40:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:03.946 14:40:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:03.946 14:40:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:03.946 14:40:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.946 14:40:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.947 14:40:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.947 14:40:45 -- paths/export.sh@5 -- $ export PATH
00:33:03.947 14:40:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:03.947 14:40:45 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:03.947 14:40:45 -- common/autobuild_common.sh@479 -- $ date +%s
00:33:03.947 14:40:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732891245.XXXXXX
00:33:03.947 14:40:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732891245.75ZdBI
00:33:03.947 14:40:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:33:03.947 14:40:45 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:33:03.947 14:40:45 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:33:03.947 14:40:45 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:33:03.947 14:40:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:03.947 14:40:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:03.947 14:40:45 -- common/autobuild_common.sh@495 -- $ get_config_params
00:33:03.947 14:40:45 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:33:03.947 14:40:45 -- common/autotest_common.sh@10 -- $ set +x
00:33:03.947 14:40:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:33:03.947 14:40:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:33:03.947 14:40:45 -- pm/common@17 -- $ local monitor
00:33:03.947 14:40:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.947 14:40:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:03.947 14:40:45 -- pm/common@25 -- $ sleep 1
00:33:03.947 14:40:45 -- pm/common@21 -- $ date +%s
00:33:03.947 14:40:45 -- pm/common@21 -- $ date +%s
00:33:03.947 14:40:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732891245
00:33:03.947 14:40:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732891245
00:33:03.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732891245_collect-vmstat.pm.log
00:33:03.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732891245_collect-cpu-load.pm.log
00:33:04.890 14:40:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:33:04.890 14:40:46 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:33:04.890 14:40:46 -- spdk/autopackage.sh@14 -- $ timing_finish
00:33:04.890 14:40:46 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:04.890 14:40:46 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:04.890 14:40:46 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:04.890 14:40:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:04.890 14:40:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:04.890 14:40:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:04.890 14:40:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.890 14:40:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:04.890 14:40:46 -- pm/common@44 -- $ pid=97235
00:33:04.890 14:40:46 -- pm/common@50 -- $ kill -TERM 97235
00:33:04.890 14:40:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:04.890 14:40:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:04.890 14:40:46 -- pm/common@44 -- $ pid=97236
00:33:04.890 14:40:46 -- pm/common@50 -- $ kill -TERM 97236
00:33:04.890 + [[ -n 5764 ]]
00:33:04.890 + sudo kill 5764
00:33:04.901 [Pipeline] }
00:33:04.919 [Pipeline] // timeout
00:33:04.926 [Pipeline] }
00:33:04.942 [Pipeline] // stage
00:33:04.949 [Pipeline] }
00:33:04.965 [Pipeline] // catchError
00:33:04.976 [Pipeline] stage
00:33:04.979 [Pipeline] { (Stop VM)
00:33:04.994 [Pipeline] sh
00:33:05.284 + vagrant halt
00:33:07.829 ==> default: Halting domain...
00:33:13.236 [Pipeline] sh
00:33:13.523 + vagrant destroy -f
00:33:16.077 ==> default: Removing domain...
00:33:16.665 [Pipeline] sh
00:33:16.969 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:16.980 [Pipeline] }
00:33:16.995 [Pipeline] // stage
00:33:17.000 [Pipeline] }
00:33:17.014 [Pipeline] // dir
00:33:17.020 [Pipeline] }
00:33:17.034 [Pipeline] // wrap
00:33:17.041 [Pipeline] }
00:33:17.053 [Pipeline] // catchError
00:33:17.063 [Pipeline] stage
00:33:17.066 [Pipeline] { (Epilogue)
00:33:17.079 [Pipeline] sh
00:33:17.367 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:23.980 [Pipeline] catchError
00:33:23.982 [Pipeline] {
00:33:23.993 [Pipeline] sh
00:33:24.269 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:24.269 Artifacts sizes are good
00:33:24.277 [Pipeline] }
00:33:24.292 [Pipeline] // catchError
00:33:24.304 [Pipeline] archiveArtifacts
00:33:24.311 Archiving artifacts
00:33:24.417 [Pipeline] cleanWs
00:33:24.429 [WS-CLEANUP] Deleting project workspace...
00:33:24.429 [WS-CLEANUP] Deferred wipeout is used...
00:33:24.435 [WS-CLEANUP] done
00:33:24.437 [Pipeline] }
00:33:24.455 [Pipeline] // stage
00:33:24.461 [Pipeline] }
00:33:24.476 [Pipeline] // node
00:33:24.483 [Pipeline] End of Pipeline
00:33:24.524 Finished: SUCCESS