00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 173 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3674 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.120 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.153 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.227 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.227 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.201 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.211 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.221 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.221 > git config core.sparsecheckout # timeout=10 00:00:05.232 > git read-tree -mu HEAD # timeout=10 00:00:05.247 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.268 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.268 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.378 [Pipeline] Start of Pipeline 00:00:05.394 [Pipeline] library 00:00:05.396 Loading library shm_lib@master 00:00:05.397 Library shm_lib@master is cached. Copying from home. 00:00:05.414 [Pipeline] node 00:00:05.427 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.429 [Pipeline] { 00:00:05.441 [Pipeline] catchError 00:00:05.442 [Pipeline] { 00:00:05.456 [Pipeline] wrap 00:00:05.465 [Pipeline] { 00:00:05.474 [Pipeline] stage 00:00:05.476 [Pipeline] { (Prologue) 00:00:05.495 [Pipeline] echo 00:00:05.497 Node: VM-host-SM38 00:00:05.504 [Pipeline] cleanWs 00:00:05.516 [WS-CLEANUP] Deleting project workspace... 00:00:05.516 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.523 [WS-CLEANUP] done 00:00:05.745 [Pipeline] setCustomBuildProperty 00:00:05.846 [Pipeline] httpRequest 00:00:06.561 [Pipeline] echo 00:00:06.563 Sorcerer 10.211.164.20 is alive 00:00:06.572 [Pipeline] retry 00:00:06.574 [Pipeline] { 00:00:06.587 [Pipeline] httpRequest 00:00:06.591 HttpMethod: GET 00:00:06.592 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.592 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.603 Response Code: HTTP/1.1 200 OK 00:00:06.604 Success: Status code 200 is in the accepted range: 200,404 00:00:06.604 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.730 [Pipeline] } 00:00:07.749 [Pipeline] // retry 00:00:07.756 [Pipeline] sh 00:00:08.043 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.062 [Pipeline] httpRequest 00:00:08.414 [Pipeline] echo 00:00:08.416 Sorcerer 10.211.164.20 is 
alive 00:00:08.424 [Pipeline] retry 00:00:08.425 [Pipeline] { 00:00:08.435 [Pipeline] httpRequest 00:00:08.439 HttpMethod: GET 00:00:08.439 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.440 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.460 Response Code: HTTP/1.1 200 OK 00:00:08.460 Success: Status code 200 is in the accepted range: 200,404 00:00:08.460 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:32.626 [Pipeline] } 00:00:32.645 [Pipeline] // retry 00:00:32.654 [Pipeline] sh 00:00:32.941 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:36.262 [Pipeline] sh 00:00:36.548 + git -C spdk log --oneline -n5 00:00:36.548 b18e1bd62 version: v24.09.1-pre 00:00:36.548 19524ad45 version: v24.09 00:00:36.548 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:36.548 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:36.548 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:36.568 [Pipeline] withCredentials 00:00:36.581 > git --version # timeout=10 00:00:36.595 > git --version # 'git version 2.39.2' 00:00:36.616 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:36.618 [Pipeline] { 00:00:36.628 [Pipeline] retry 00:00:36.630 [Pipeline] { 00:00:36.645 [Pipeline] sh 00:00:36.930 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:36.944 [Pipeline] } 00:00:36.963 [Pipeline] // retry 00:00:36.969 [Pipeline] } 00:00:36.985 [Pipeline] // withCredentials 00:00:36.994 [Pipeline] httpRequest 00:00:37.539 [Pipeline] echo 00:00:37.541 Sorcerer 10.211.164.20 is alive 00:00:37.550 [Pipeline] retry 00:00:37.552 [Pipeline] { 00:00:37.566 [Pipeline] httpRequest 00:00:37.572 HttpMethod: GET 00:00:37.572 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 
00:00:37.573 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:37.579 Response Code: HTTP/1.1 200 OK 00:00:37.579 Success: Status code 200 is in the accepted range: 200,404 00:00:37.580 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:32.215 [Pipeline] } 00:01:32.232 [Pipeline] // retry 00:01:32.240 [Pipeline] sh 00:01:32.527 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:34.551 [Pipeline] sh 00:01:34.833 + git -C dpdk log --oneline -n5 00:01:34.834 eeb0605f11 version: 23.11.0 00:01:34.834 238778122a doc: update release notes for 23.11 00:01:34.834 46aa6b3cfc doc: fix description of RSS features 00:01:34.834 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:34.834 7e421ae345 devtools: support skipping forbid rule check 00:01:34.852 [Pipeline] writeFile 00:01:34.869 [Pipeline] sh 00:01:35.148 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:35.159 [Pipeline] sh 00:01:35.437 + cat autorun-spdk.conf 00:01:35.437 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.437 SPDK_TEST_NVME=1 00:01:35.437 SPDK_TEST_FTL=1 00:01:35.437 SPDK_TEST_ISAL=1 00:01:35.437 SPDK_RUN_ASAN=1 00:01:35.437 SPDK_RUN_UBSAN=1 00:01:35.437 SPDK_TEST_XNVME=1 00:01:35.437 SPDK_TEST_NVME_FDP=1 00:01:35.437 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.437 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.437 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.443 RUN_NIGHTLY=1 00:01:35.445 [Pipeline] } 00:01:35.459 [Pipeline] // stage 00:01:35.474 [Pipeline] stage 00:01:35.476 [Pipeline] { (Run VM) 00:01:35.489 [Pipeline] sh 00:01:35.766 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:35.766 + echo 'Start stage prepare_nvme.sh' 00:01:35.766 Start stage prepare_nvme.sh 00:01:35.766 + [[ -n 3 ]] 00:01:35.766 + disk_prefix=ex3 00:01:35.766 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 
00:01:35.766 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:35.766 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:35.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.766 ++ SPDK_TEST_NVME=1 00:01:35.766 ++ SPDK_TEST_FTL=1 00:01:35.766 ++ SPDK_TEST_ISAL=1 00:01:35.766 ++ SPDK_RUN_ASAN=1 00:01:35.766 ++ SPDK_RUN_UBSAN=1 00:01:35.766 ++ SPDK_TEST_XNVME=1 00:01:35.766 ++ SPDK_TEST_NVME_FDP=1 00:01:35.766 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.766 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.766 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.766 ++ RUN_NIGHTLY=1 00:01:35.766 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:35.766 + nvme_files=() 00:01:35.766 + declare -A nvme_files 00:01:35.766 + backend_dir=/var/lib/libvirt/images/backends 00:01:35.766 + nvme_files['nvme.img']=5G 00:01:35.766 + nvme_files['nvme-cmb.img']=5G 00:01:35.766 + nvme_files['nvme-multi0.img']=4G 00:01:35.766 + nvme_files['nvme-multi1.img']=4G 00:01:35.766 + nvme_files['nvme-multi2.img']=4G 00:01:35.766 + nvme_files['nvme-openstack.img']=8G 00:01:35.766 + nvme_files['nvme-zns.img']=5G 00:01:35.766 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:35.766 + (( SPDK_TEST_FTL == 1 )) 00:01:35.766 + nvme_files["nvme-ftl.img"]=6G 00:01:35.766 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:35.766 + nvme_files["nvme-fdp.img"]=1G 00:01:35.766 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:35.766 + for nvme in "${!nvme_files[@]}" 00:01:35.766 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:35.766 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.766 + for nvme in "${!nvme_files[@]}" 00:01:35.766 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:01:36.332 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:36.332 + for nvme in "${!nvme_files[@]}" 00:01:36.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:36.332 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.332 + for nvme in "${!nvme_files[@]}" 00:01:36.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:36.332 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:36.332 + for nvme in "${!nvme_files[@]}" 00:01:36.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:36.590 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.590 + for nvme in "${!nvme_files[@]}" 00:01:36.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:36.590 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.590 + for nvme in "${!nvme_files[@]}" 00:01:36.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:36.590 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.590 + for nvme in "${!nvme_files[@]}" 00:01:36.590 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:01:36.848 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:36.848 + for nvme in "${!nvme_files[@]}" 00:01:36.848 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:37.108 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.108 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:37.108 + echo 'End stage prepare_nvme.sh' 00:01:37.108 End stage prepare_nvme.sh 00:01:37.120 [Pipeline] sh 00:01:37.402 + DISTRO=fedora39 00:01:37.402 + CPUS=10 00:01:37.402 + RAM=12288 00:01:37.402 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.402 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:37.402 00:01:37.402 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:37.402 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:37.402 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:37.402 HELP=0 00:01:37.402 DRY_RUN=0 00:01:37.402 
NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:01:37.402 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:37.402 NVME_AUTO_CREATE=0 00:01:37.402 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:01:37.402 NVME_CMB=,,,, 00:01:37.402 NVME_PMR=,,,, 00:01:37.402 NVME_ZNS=,,,, 00:01:37.402 NVME_MS=true,,,, 00:01:37.402 NVME_FDP=,,,on, 00:01:37.402 SPDK_VAGRANT_DISTRO=fedora39 00:01:37.402 SPDK_VAGRANT_VMCPU=10 00:01:37.402 SPDK_VAGRANT_VMRAM=12288 00:01:37.402 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.402 SPDK_VAGRANT_HTTP_PROXY= 00:01:37.402 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.402 SPDK_OPENSTACK_NETWORK=0 00:01:37.402 VAGRANT_PACKAGE_BOX=0 00:01:37.402 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.402 FORCE_DISTRO=true 00:01:37.402 VAGRANT_BOX_VERSION= 00:01:37.402 EXTRA_VAGRANTFILES= 00:01:37.402 NIC_MODEL=e1000 00:01:37.402 00:01:37.402 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:37.402 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:39.944 Bringing machine 'default' up with 'libvirt' provider... 00:01:40.518 ==> default: Creating image (snapshot of base box volume). 00:01:40.779 ==> default: Creating domain with the following settings... 
00:01:40.779 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732783534_9432d56152641d1b8c84 00:01:40.779 ==> default: -- Domain type: kvm 00:01:40.779 ==> default: -- Cpus: 10 00:01:40.779 ==> default: -- Feature: acpi 00:01:40.779 ==> default: -- Feature: apic 00:01:40.779 ==> default: -- Feature: pae 00:01:40.779 ==> default: -- Memory: 12288M 00:01:40.779 ==> default: -- Memory Backing: hugepages: 00:01:40.779 ==> default: -- Management MAC: 00:01:40.779 ==> default: -- Loader: 00:01:40.779 ==> default: -- Nvram: 00:01:40.779 ==> default: -- Base box: spdk/fedora39 00:01:40.779 ==> default: -- Storage pool: default 00:01:40.779 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732783534_9432d56152641d1b8c84.img (20G) 00:01:40.779 ==> default: -- Volume Cache: default 00:01:40.779 ==> default: -- Kernel: 00:01:40.779 ==> default: -- Initrd: 00:01:40.779 ==> default: -- Graphics Type: vnc 00:01:40.779 ==> default: -- Graphics Port: -1 00:01:40.779 ==> default: -- Graphics IP: 127.0.0.1 00:01:40.779 ==> default: -- Graphics Password: Not defined 00:01:40.779 ==> default: -- Video Type: cirrus 00:01:40.779 ==> default: -- Video VRAM: 9216 00:01:40.779 ==> default: -- Sound Type: 00:01:40.779 ==> default: -- Keymap: en-us 00:01:40.779 ==> default: -- TPM Path: 00:01:40.779 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:40.779 ==> default: -- Command line args: 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:40.779 ==> default: -> value=-drive, 00:01:40.779 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:40.779 ==> default: -> value=-device, 00:01:40.779 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.779 ==> default: Creating shared folders metadata... 00:01:40.779 ==> default: Starting domain. 00:01:43.323 ==> default: Waiting for domain to get an IP address... 00:02:01.450 ==> default: Waiting for SSH to become available... 00:02:01.450 ==> default: Configuring and enabling network interfaces... 00:02:04.759 default: SSH address: 192.168.121.203:22 00:02:04.759 default: SSH username: vagrant 00:02:04.759 default: SSH auth method: private key 00:02:06.710 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:14.847 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:21.442 ==> default: Mounting SSHFS shared folder... 00:02:22.830 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:22.830 ==> default: Checking Mount.. 00:02:23.773 ==> default: Folder Successfully Mounted! 00:02:23.773 00:02:23.773 SUCCESS! 00:02:23.773 00:02:23.773 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:23.773 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:23.773 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:23.773 00:02:23.785 [Pipeline] } 00:02:23.800 [Pipeline] // stage 00:02:23.810 [Pipeline] dir 00:02:23.810 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:23.812 [Pipeline] { 00:02:23.828 [Pipeline] catchError 00:02:23.830 [Pipeline] { 00:02:23.846 [Pipeline] sh 00:02:24.130 + vagrant ssh-config --host vagrant 00:02:24.130 + sed -ne '/^Host/,$p' 00:02:24.130 + tee ssh_conf 00:02:27.434 Host vagrant 00:02:27.434 HostName 192.168.121.203 00:02:27.434 User vagrant 00:02:27.434 Port 22 00:02:27.434 UserKnownHostsFile /dev/null 00:02:27.434 StrictHostKeyChecking no 00:02:27.434 PasswordAuthentication no 00:02:27.434 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:27.434 IdentitiesOnly yes 00:02:27.434 LogLevel FATAL 00:02:27.434 ForwardAgent yes 00:02:27.434 ForwardX11 yes 00:02:27.434 00:02:27.447 [Pipeline] withEnv 00:02:27.449 [Pipeline] { 00:02:27.461 [Pipeline] sh 00:02:27.743 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:27.743 source /etc/os-release 00:02:27.743 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.743 # Minimal, systemd-like check. 00:02:27.743 if [[ -e /.dockerenv ]]; then 00:02:27.743 # Clear garbage from the node'\''s name: 00:02:27.743 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.743 # $HOSTNAME is the actual container id 00:02:27.743 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.743 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:27.743 # We can assume this is a mount from a host where container is running, 00:02:27.743 # so fetch its hostname to easily identify the target swarm worker. 
00:02:27.743 container="$(< /etc/hostname) ($agent)" 00:02:27.743 else 00:02:27.743 # Fallback 00:02:27.743 container=$agent 00:02:27.743 fi 00:02:27.743 fi 00:02:27.743 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.743 ' 00:02:28.016 [Pipeline] } 00:02:28.031 [Pipeline] // withEnv 00:02:28.041 [Pipeline] setCustomBuildProperty 00:02:28.055 [Pipeline] stage 00:02:28.057 [Pipeline] { (Tests) 00:02:28.073 [Pipeline] sh 00:02:28.356 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:28.633 [Pipeline] sh 00:02:28.918 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:29.198 [Pipeline] timeout 00:02:29.198 Timeout set to expire in 50 min 00:02:29.200 [Pipeline] { 00:02:29.215 [Pipeline] sh 00:02:29.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:30.079 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:30.095 [Pipeline] sh 00:02:30.384 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:30.661 [Pipeline] sh 00:02:30.947 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:31.224 [Pipeline] sh 00:02:31.508 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:31.770 ++ readlink -f spdk_repo 00:02:31.770 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.770 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.770 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.770 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.770 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.770 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.770 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.770 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:31.770 + cd /home/vagrant/spdk_repo 00:02:31.771 + source /etc/os-release 00:02:31.771 ++ NAME='Fedora Linux' 00:02:31.771 ++ VERSION='39 (Cloud Edition)' 00:02:31.771 ++ ID=fedora 00:02:31.771 ++ VERSION_ID=39 00:02:31.771 ++ VERSION_CODENAME= 00:02:31.771 ++ PLATFORM_ID=platform:f39 00:02:31.771 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:31.771 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.771 ++ LOGO=fedora-logo-icon 00:02:31.771 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:31.771 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.771 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:31.771 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.771 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.771 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.771 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:31.771 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.771 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:31.771 ++ SUPPORT_END=2024-11-12 00:02:31.771 ++ VARIANT='Cloud Edition' 00:02:31.771 ++ VARIANT_ID=cloud 00:02:31.771 + uname -a 00:02:31.771 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:31.771 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:32.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:32.607 Hugepages 00:02:32.607 node hugesize free / total 00:02:32.607 node0 1048576kB 0 / 0 00:02:32.607 node0 2048kB 0 / 0 00:02:32.607 00:02:32.607 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.607 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:32.607 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:32.607 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 00:02:32.607 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:32.607 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:32.607 + rm -f /tmp/spdk-ld-path 00:02:32.607 + source autorun-spdk.conf 00:02:32.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.607 ++ SPDK_TEST_NVME=1 00:02:32.607 ++ SPDK_TEST_FTL=1 00:02:32.607 ++ SPDK_TEST_ISAL=1 00:02:32.607 ++ SPDK_RUN_ASAN=1 00:02:32.607 ++ SPDK_RUN_UBSAN=1 00:02:32.607 ++ SPDK_TEST_XNVME=1 00:02:32.607 ++ SPDK_TEST_NVME_FDP=1 00:02:32.607 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:32.607 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.607 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.607 ++ RUN_NIGHTLY=1 00:02:32.607 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:32.607 + [[ -n '' ]] 00:02:32.607 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:32.607 + for M in /var/spdk/build-*-manifest.txt 00:02:32.607 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:32.607 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.607 + for M in /var/spdk/build-*-manifest.txt 00:02:32.607 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:32.607 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.607 + for M in /var/spdk/build-*-manifest.txt 00:02:32.607 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:32.607 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.607 ++ uname 00:02:32.607 + [[ Linux == \L\i\n\u\x ]] 00:02:32.607 + sudo dmesg -T 00:02:32.607 + sudo dmesg --clear 00:02:32.607 + dmesg_pid=5769 00:02:32.607 + [[ Fedora Linux == FreeBSD ]] 00:02:32.607 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.607 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.607 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:32.607 + [[ -x /usr/src/fio-static/fio ]] 00:02:32.607 + sudo dmesg -Tw 00:02:32.607 + export 
FIO_BIN=/usr/src/fio-static/fio 00:02:32.607 + FIO_BIN=/usr/src/fio-static/fio 00:02:32.607 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:32.607 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:32.607 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:32.607 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.607 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.607 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:32.607 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.607 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.607 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:32.870 Test configuration: 00:02:32.870 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.870 SPDK_TEST_NVME=1 00:02:32.870 SPDK_TEST_FTL=1 00:02:32.870 SPDK_TEST_ISAL=1 00:02:32.870 SPDK_RUN_ASAN=1 00:02:32.870 SPDK_RUN_UBSAN=1 00:02:32.870 SPDK_TEST_XNVME=1 00:02:32.870 SPDK_TEST_NVME_FDP=1 00:02:32.870 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:32.870 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.870 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.870 RUN_NIGHTLY=1 08:46:26 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:32.870 08:46:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:32.870 08:46:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:32.870 08:46:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:32.870 08:46:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.870 08:46:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.870 08:46:26 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.870 08:46:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.870 08:46:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.870 08:46:26 -- paths/export.sh@5 -- $ export PATH 00:02:32.870 08:46:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.870 08:46:26 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:32.870 08:46:26 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:32.870 08:46:26 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732783586.XXXXXX 00:02:32.870 08:46:26 -- 
common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732783586.axwdqf 00:02:32.870 08:46:26 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:32.870 08:46:26 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:32.870 08:46:26 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.870 08:46:26 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:32.870 08:46:26 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:32.870 08:46:26 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:32.870 08:46:26 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:32.870 08:46:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:32.870 08:46:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.870 08:46:26 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:32.870 08:46:26 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:32.870 08:46:26 -- pm/common@17 -- $ local monitor 00:02:32.870 08:46:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.870 08:46:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.870 08:46:26 -- pm/common@25 -- $ sleep 1 00:02:32.870 08:46:26 -- pm/common@21 -- $ date +%s 00:02:32.870 08:46:26 -- pm/common@21 -- $ date +%s 00:02:32.870 08:46:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732783586 00:02:32.870 08:46:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732783586 00:02:32.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732783586_collect-cpu-load.pm.log 00:02:32.870 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732783586_collect-vmstat.pm.log 00:02:33.817 08:46:27 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:33.817 08:46:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:33.817 08:46:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:33.817 08:46:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:33.817 08:46:27 -- spdk/autobuild.sh@16 -- $ date -u 00:02:33.817 Thu Nov 28 08:46:27 AM UTC 2024 00:02:33.817 08:46:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:33.817 v24.09-1-gb18e1bd62 00:02:33.817 08:46:27 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:33.817 08:46:27 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:33.817 08:46:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:33.818 08:46:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:33.818 08:46:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.818 ************************************ 00:02:33.818 START TEST asan 00:02:33.818 ************************************ 00:02:33.818 using asan 00:02:33.818 08:46:27 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:33.818 00:02:33.818 real 0m0.000s 00:02:33.818 user 0m0.000s 00:02:33.818 sys 0m0.000s 00:02:33.818 ************************************ 00:02:33.818 END TEST asan 00:02:33.818 ************************************ 00:02:33.818 08:46:27 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:33.818 08:46:27 asan -- 
common/autotest_common.sh@10 -- $ set +x 00:02:34.080 08:46:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:34.080 08:46:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:34.080 08:46:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:34.080 08:46:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:34.080 08:46:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.080 ************************************ 00:02:34.080 START TEST ubsan 00:02:34.080 ************************************ 00:02:34.080 using ubsan 00:02:34.080 08:46:27 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:34.080 00:02:34.080 real 0m0.000s 00:02:34.080 user 0m0.000s 00:02:34.080 sys 0m0.000s 00:02:34.080 08:46:27 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:34.080 08:46:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:34.080 ************************************ 00:02:34.080 END TEST ubsan 00:02:34.080 ************************************ 00:02:34.080 08:46:28 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:34.080 08:46:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:34.080 08:46:28 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:34.080 08:46:28 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:34.080 08:46:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:34.080 08:46:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.080 ************************************ 00:02:34.080 START TEST build_native_dpdk 00:02:34.080 ************************************ 00:02:34.080 08:46:28 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:34.080 08:46:28 build_native_dpdk -- 
common/autobuild_common.sh@50 -- $ local compiler_version 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:34.080 eeb0605f11 version: 23.11.0 00:02:34.080 238778122a doc: update release notes for 23.11 00:02:34.080 46aa6b3cfc doc: fix description of RSS features 00:02:34.080 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:34.080 7e421ae345 devtools: support skipping forbid rule check 00:02:34.080 08:46:28 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:34.081 08:46:28 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:34.081 patching file config/rte_config.h 00:02:34.081 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:34.081 patching file lib/pcapng/rte_pcapng.c 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:34.081 08:46:28 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:34.081 08:46:28 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:34.081 08:46:28 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.378 The Meson build system 00:02:39.378 Version: 1.5.0 00:02:39.378 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:39.378 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:39.378 Build type: native build 00:02:39.378 Program cat found: YES (/usr/bin/cat) 00:02:39.378 Project name: DPDK 00:02:39.378 Project version: 23.11.0 00:02:39.378 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:39.378 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:39.378 Host machine cpu family: x86_64 00:02:39.378 Host machine cpu: x86_64 00:02:39.378 Message: ## Building in Developer Mode ## 00:02:39.378 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.378 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:39.378 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.378 Program python3 found: YES (/usr/bin/python3) 00:02:39.378 Program cat found: YES (/usr/bin/cat) 00:02:39.378 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:39.378 Compiler for C supports arguments -march=native: YES 00:02:39.378 Checking for size of "void *" : 8 00:02:39.378 Checking for size of "void *" : 8 (cached) 00:02:39.378 Library m found: YES 00:02:39.378 Library numa found: YES 00:02:39.378 Has header "numaif.h" : YES 00:02:39.378 Library fdt found: NO 00:02:39.378 Library execinfo found: NO 00:02:39.378 Has header "execinfo.h" : YES 00:02:39.378 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:39.378 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.378 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.378 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.378 Run-time dependency openssl found: YES 3.1.1 00:02:39.378 Run-time dependency libpcap found: YES 1.10.4 00:02:39.378 Has header "pcap.h" with dependency libpcap: YES 00:02:39.378 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.378 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.378 Compiler for C supports arguments -Wformat: YES 00:02:39.378 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.378 Compiler for C supports arguments -Wformat-security: NO 00:02:39.378 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.378 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.378 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.378 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.378 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.378 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.378 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.379 Compiler for C supports arguments -Wundef: YES 00:02:39.379 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.379 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.379 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.379 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:39.379 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.379 Program objdump found: YES (/usr/bin/objdump) 00:02:39.379 Compiler for C supports arguments -mavx512f: YES 00:02:39.379 Checking if "AVX512 checking" compiles: YES 00:02:39.379 Fetching value of define "__SSE4_2__" : 1 00:02:39.379 Fetching value of define "__AES__" : 1 00:02:39.379 Fetching value of define "__AVX__" : 1 00:02:39.379 Fetching value of define "__AVX2__" : 1 00:02:39.379 Fetching value of define "__AVX512BW__" : 1 00:02:39.379 Fetching value of define "__AVX512CD__" : 1 00:02:39.379 Fetching value of define "__AVX512DQ__" : 1 00:02:39.379 Fetching value of define "__AVX512F__" : 1 00:02:39.379 Fetching value of define "__AVX512VL__" : 1 00:02:39.379 Fetching value of define "__PCLMUL__" : 1 00:02:39.379 Fetching value of define "__RDRND__" : 1 00:02:39.379 Fetching value of define "__RDSEED__" : 1 00:02:39.379 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:39.379 Fetching value of define "__znver1__" : (undefined) 00:02:39.379 Fetching value of define "__znver2__" : (undefined) 00:02:39.379 Fetching value of define "__znver3__" : (undefined) 00:02:39.379 Fetching value of define "__znver4__" : (undefined) 00:02:39.379 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.379 Message: lib/log: Defining dependency "log" 00:02:39.379 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.379 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.379 Checking for function "getentropy" : NO 00:02:39.379 Message: lib/eal: Defining dependency "eal" 00:02:39.379 Message: lib/ring: Defining dependency "ring" 00:02:39.379 Message: lib/rcu: Defining dependency "rcu" 00:02:39.379 Message: lib/mempool: Defining dependency "mempool" 00:02:39.379 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.379 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.379 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:39.379 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:39.379 Compiler for C supports arguments -mpclmul: YES 00:02:39.379 Compiler for C supports arguments -maes: YES 00:02:39.379 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.379 Compiler for C supports arguments -mavx512bw: YES 00:02:39.379 Compiler for C supports arguments -mavx512dq: YES 00:02:39.379 Compiler for C supports arguments -mavx512vl: YES 00:02:39.379 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.379 Compiler for C supports arguments -mavx2: YES 00:02:39.379 Compiler for C supports arguments -mavx: YES 00:02:39.379 Message: lib/net: Defining dependency "net" 00:02:39.379 Message: lib/meter: Defining dependency "meter" 00:02:39.379 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.379 Message: lib/pci: Defining dependency "pci" 00:02:39.379 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.379 Message: lib/metrics: Defining dependency "metrics" 00:02:39.379 Message: lib/hash: Defining dependency "hash" 00:02:39.379 Message: lib/timer: Defining dependency "timer" 00:02:39.379 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.379 Message: lib/acl: Defining dependency "acl" 00:02:39.379 Message: lib/bbdev: Defining dependency "bbdev" 00:02:39.379 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:39.379 Run-time dependency libelf found: YES 0.191 00:02:39.379 Message: lib/bpf: Defining dependency "bpf" 00:02:39.379 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:39.379 
Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.379 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.379 Message: lib/distributor: Defining dependency "distributor" 00:02:39.379 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.379 Message: lib/efd: Defining dependency "efd" 00:02:39.379 Message: lib/eventdev: Defining dependency "eventdev" 00:02:39.379 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:39.379 Message: lib/gpudev: Defining dependency "gpudev" 00:02:39.379 Message: lib/gro: Defining dependency "gro" 00:02:39.379 Message: lib/gso: Defining dependency "gso" 00:02:39.379 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:39.379 Message: lib/jobstats: Defining dependency "jobstats" 00:02:39.379 Message: lib/latencystats: Defining dependency "latencystats" 00:02:39.379 Message: lib/lpm: Defining dependency "lpm" 00:02:39.379 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512IFMA__" : 1 00:02:39.379 Message: lib/member: Defining dependency "member" 00:02:39.379 Message: lib/pcapng: Defining dependency "pcapng" 00:02:39.379 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.379 Message: lib/power: Defining dependency "power" 00:02:39.379 Message: lib/rawdev: Defining dependency "rawdev" 00:02:39.379 Message: lib/regexdev: Defining dependency "regexdev" 00:02:39.379 Message: lib/mldev: Defining dependency "mldev" 00:02:39.379 Message: lib/rib: Defining dependency "rib" 00:02:39.379 Message: lib/reorder: Defining dependency "reorder" 00:02:39.379 Message: lib/sched: Defining dependency "sched" 00:02:39.379 Message: lib/security: Defining dependency "security" 00:02:39.379 Message: lib/stack: Defining dependency "stack" 00:02:39.379 Has header "linux/userfaultfd.h" : YES 00:02:39.379 Has header "linux/vduse.h" : YES 00:02:39.379 Message: lib/vhost: Defining dependency 
"vhost" 00:02:39.379 Message: lib/ipsec: Defining dependency "ipsec" 00:02:39.379 Message: lib/pdcp: Defining dependency "pdcp" 00:02:39.379 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:39.379 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:39.379 Message: lib/fib: Defining dependency "fib" 00:02:39.379 Message: lib/port: Defining dependency "port" 00:02:39.379 Message: lib/pdump: Defining dependency "pdump" 00:02:39.379 Message: lib/table: Defining dependency "table" 00:02:39.379 Message: lib/pipeline: Defining dependency "pipeline" 00:02:39.379 Message: lib/graph: Defining dependency "graph" 00:02:39.379 Message: lib/node: Defining dependency "node" 00:02:39.379 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.379 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.379 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.379 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:40.765 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:40.765 Compiler for C supports arguments -Wno-unused-value: YES 00:02:40.765 Compiler for C supports arguments -Wno-format: YES 00:02:40.765 Compiler for C supports arguments -Wno-format-security: YES 00:02:40.765 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:40.765 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:40.765 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:40.765 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:40.765 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:40.765 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:40.765 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.765 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:40.765 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:40.765 
Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:40.765 Has header "sys/epoll.h" : YES 00:02:40.765 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.765 Configuring doxy-api-html.conf using configuration 00:02:40.765 Configuring doxy-api-man.conf using configuration 00:02:40.765 Program mandb found: YES (/usr/bin/mandb) 00:02:40.765 Program sphinx-build found: NO 00:02:40.765 Configuring rte_build_config.h using configuration 00:02:40.765 Message: 00:02:40.765 ================= 00:02:40.765 Applications Enabled 00:02:40.765 ================= 00:02:40.765 00:02:40.765 apps: 00:02:40.765 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:40.765 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:40.765 test-pmd, test-regex, test-sad, test-security-perf, 00:02:40.765 00:02:40.765 Message: 00:02:40.765 ================= 00:02:40.765 Libraries Enabled 00:02:40.765 ================= 00:02:40.765 00:02:40.765 libs: 00:02:40.765 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:40.765 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:40.765 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:40.765 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:40.765 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:40.765 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:40.765 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:40.765 00:02:40.765 00:02:40.765 Message: 00:02:40.765 =============== 00:02:40.765 Drivers Enabled 00:02:40.765 =============== 00:02:40.765 00:02:40.765 common: 00:02:40.765 00:02:40.765 bus: 00:02:40.765 pci, vdev, 00:02:40.765 mempool: 00:02:40.765 ring, 00:02:40.765 dma: 00:02:40.765 00:02:40.765 net: 00:02:40.765 i40e, 00:02:40.765 raw: 00:02:40.765 00:02:40.765 crypto: 00:02:40.765 00:02:40.765 
compress: 00:02:40.765 00:02:40.765 regex: 00:02:40.765 00:02:40.765 ml: 00:02:40.765 00:02:40.765 vdpa: 00:02:40.765 00:02:40.765 event: 00:02:40.765 00:02:40.765 baseband: 00:02:40.765 00:02:40.765 gpu: 00:02:40.765 00:02:40.765 00:02:40.765 Message: 00:02:40.765 ================= 00:02:40.765 Content Skipped 00:02:40.765 ================= 00:02:40.765 00:02:40.765 apps: 00:02:40.765 00:02:40.765 libs: 00:02:40.765 00:02:40.765 drivers: 00:02:40.765 common/cpt: not in enabled drivers build config 00:02:40.765 common/dpaax: not in enabled drivers build config 00:02:40.765 common/iavf: not in enabled drivers build config 00:02:40.765 common/idpf: not in enabled drivers build config 00:02:40.765 common/mvep: not in enabled drivers build config 00:02:40.765 common/octeontx: not in enabled drivers build config 00:02:40.765 bus/auxiliary: not in enabled drivers build config 00:02:40.765 bus/cdx: not in enabled drivers build config 00:02:40.765 bus/dpaa: not in enabled drivers build config 00:02:40.765 bus/fslmc: not in enabled drivers build config 00:02:40.765 bus/ifpga: not in enabled drivers build config 00:02:40.765 bus/platform: not in enabled drivers build config 00:02:40.765 bus/vmbus: not in enabled drivers build config 00:02:40.765 common/cnxk: not in enabled drivers build config 00:02:40.765 common/mlx5: not in enabled drivers build config 00:02:40.765 common/nfp: not in enabled drivers build config 00:02:40.765 common/qat: not in enabled drivers build config 00:02:40.765 common/sfc_efx: not in enabled drivers build config 00:02:40.765 mempool/bucket: not in enabled drivers build config 00:02:40.765 mempool/cnxk: not in enabled drivers build config 00:02:40.765 mempool/dpaa: not in enabled drivers build config 00:02:40.765 mempool/dpaa2: not in enabled drivers build config 00:02:40.765 mempool/octeontx: not in enabled drivers build config 00:02:40.765 mempool/stack: not in enabled drivers build config 00:02:40.765 dma/cnxk: not in enabled drivers build config 
00:02:40.765 dma/dpaa: not in enabled drivers build config 00:02:40.765 dma/dpaa2: not in enabled drivers build config 00:02:40.765 dma/hisilicon: not in enabled drivers build config 00:02:40.765 dma/idxd: not in enabled drivers build config 00:02:40.765 dma/ioat: not in enabled drivers build config 00:02:40.765 dma/skeleton: not in enabled drivers build config 00:02:40.765 net/af_packet: not in enabled drivers build config 00:02:40.765 net/af_xdp: not in enabled drivers build config 00:02:40.765 net/ark: not in enabled drivers build config 00:02:40.765 net/atlantic: not in enabled drivers build config 00:02:40.765 net/avp: not in enabled drivers build config 00:02:40.765 net/axgbe: not in enabled drivers build config 00:02:40.765 net/bnx2x: not in enabled drivers build config 00:02:40.765 net/bnxt: not in enabled drivers build config 00:02:40.765 net/bonding: not in enabled drivers build config 00:02:40.765 net/cnxk: not in enabled drivers build config 00:02:40.765 net/cpfl: not in enabled drivers build config 00:02:40.765 net/cxgbe: not in enabled drivers build config 00:02:40.765 net/dpaa: not in enabled drivers build config 00:02:40.765 net/dpaa2: not in enabled drivers build config 00:02:40.765 net/e1000: not in enabled drivers build config 00:02:40.765 net/ena: not in enabled drivers build config 00:02:40.765 net/enetc: not in enabled drivers build config 00:02:40.765 net/enetfec: not in enabled drivers build config 00:02:40.765 net/enic: not in enabled drivers build config 00:02:40.765 net/failsafe: not in enabled drivers build config 00:02:40.765 net/fm10k: not in enabled drivers build config 00:02:40.765 net/gve: not in enabled drivers build config 00:02:40.765 net/hinic: not in enabled drivers build config 00:02:40.765 net/hns3: not in enabled drivers build config 00:02:40.765 net/iavf: not in enabled drivers build config 00:02:40.765 net/ice: not in enabled drivers build config 00:02:40.765 net/idpf: not in enabled drivers build config 00:02:40.765 
net/igc: not in enabled drivers build config 00:02:40.765 net/ionic: not in enabled drivers build config 00:02:40.765 net/ipn3ke: not in enabled drivers build config 00:02:40.765 net/ixgbe: not in enabled drivers build config 00:02:40.765 net/mana: not in enabled drivers build config 00:02:40.765 net/memif: not in enabled drivers build config 00:02:40.765 net/mlx4: not in enabled drivers build config 00:02:40.765 net/mlx5: not in enabled drivers build config 00:02:40.765 net/mvneta: not in enabled drivers build config 00:02:40.765 net/mvpp2: not in enabled drivers build config 00:02:40.765 net/netvsc: not in enabled drivers build config 00:02:40.765 net/nfb: not in enabled drivers build config 00:02:40.766 net/nfp: not in enabled drivers build config 00:02:40.766 net/ngbe: not in enabled drivers build config 00:02:40.766 net/null: not in enabled drivers build config 00:02:40.766 net/octeontx: not in enabled drivers build config 00:02:40.766 net/octeon_ep: not in enabled drivers build config 00:02:40.766 net/pcap: not in enabled drivers build config 00:02:40.766 net/pfe: not in enabled drivers build config 00:02:40.766 net/qede: not in enabled drivers build config 00:02:40.766 net/ring: not in enabled drivers build config 00:02:40.766 net/sfc: not in enabled drivers build config 00:02:40.766 net/softnic: not in enabled drivers build config 00:02:40.766 net/tap: not in enabled drivers build config 00:02:40.766 net/thunderx: not in enabled drivers build config 00:02:40.766 net/txgbe: not in enabled drivers build config 00:02:40.766 net/vdev_netvsc: not in enabled drivers build config 00:02:40.766 net/vhost: not in enabled drivers build config 00:02:40.766 net/virtio: not in enabled drivers build config 00:02:40.766 net/vmxnet3: not in enabled drivers build config 00:02:40.766 raw/cnxk_bphy: not in enabled drivers build config 00:02:40.766 raw/cnxk_gpio: not in enabled drivers build config 00:02:40.766 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:40.766 
raw/ifpga: not in enabled drivers build config 00:02:40.766 raw/ntb: not in enabled drivers build config 00:02:40.766 raw/skeleton: not in enabled drivers build config 00:02:40.766 crypto/armv8: not in enabled drivers build config 00:02:40.766 crypto/bcmfs: not in enabled drivers build config 00:02:40.766 crypto/caam_jr: not in enabled drivers build config 00:02:40.766 crypto/ccp: not in enabled drivers build config 00:02:40.766 crypto/cnxk: not in enabled drivers build config 00:02:40.766 crypto/dpaa_sec: not in enabled drivers build config 00:02:40.766 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.766 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.766 crypto/mlx5: not in enabled drivers build config 00:02:40.766 crypto/mvsam: not in enabled drivers build config 00:02:40.766 crypto/nitrox: not in enabled drivers build config 00:02:40.766 crypto/null: not in enabled drivers build config 00:02:40.766 crypto/octeontx: not in enabled drivers build config 00:02:40.766 crypto/openssl: not in enabled drivers build config 00:02:40.766 crypto/scheduler: not in enabled drivers build config 00:02:40.766 crypto/uadk: not in enabled drivers build config 00:02:40.766 crypto/virtio: not in enabled drivers build config 00:02:40.766 compress/isal: not in enabled drivers build config 00:02:40.766 compress/mlx5: not in enabled drivers build config 00:02:40.766 compress/octeontx: not in enabled drivers build config 00:02:40.766 compress/zlib: not in enabled drivers build config 00:02:40.766 regex/mlx5: not in enabled drivers build config 00:02:40.766 regex/cn9k: not in enabled drivers build config 00:02:40.766 ml/cnxk: not in enabled drivers build config 00:02:40.766 vdpa/ifc: not in enabled drivers build config 00:02:40.766 vdpa/mlx5: not in enabled drivers build config 00:02:40.766 vdpa/nfp: not in enabled drivers build config 00:02:40.766 vdpa/sfc: not in enabled drivers build config 00:02:40.766 event/cnxk: not in enabled drivers build config 
00:02:40.766 event/dlb2: not in enabled drivers build config 00:02:40.766 event/dpaa: not in enabled drivers build config 00:02:40.766 event/dpaa2: not in enabled drivers build config 00:02:40.766 event/dsw: not in enabled drivers build config 00:02:40.766 event/opdl: not in enabled drivers build config 00:02:40.766 event/skeleton: not in enabled drivers build config 00:02:40.766 event/sw: not in enabled drivers build config 00:02:40.766 event/octeontx: not in enabled drivers build config 00:02:40.766 baseband/acc: not in enabled drivers build config 00:02:40.766 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:40.766 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:40.766 baseband/la12xx: not in enabled drivers build config 00:02:40.766 baseband/null: not in enabled drivers build config 00:02:40.766 baseband/turbo_sw: not in enabled drivers build config 00:02:40.766 gpu/cuda: not in enabled drivers build config 00:02:40.766 00:02:40.766 00:02:40.766 Build targets in project: 215 00:02:40.766 00:02:40.766 DPDK 23.11.0 00:02:40.766 00:02:40.766 User defined options 00:02:40.766 libdir : lib 00:02:40.766 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:40.766 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:40.766 c_link_args : 00:02:40.766 enable_docs : false 00:02:40.766 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:40.766 enable_kmods : false 00:02:40.766 machine : native 00:02:40.766 tests : false 00:02:40.766 00:02:40.766 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.766 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:40.766 08:46:34 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:40.766 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:40.766 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:40.766 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.766 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.766 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.028 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.028 [6/705] Linking static target lib/librte_kvargs.a 00:02:41.028 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.028 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.028 [9/705] Linking static target lib/librte_log.a 00:02:41.028 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.028 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.028 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.288 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.288 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.288 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.288 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.549 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.549 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.549 [19/705] Linking target lib/librte_log.so.24.0 00:02:41.549 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.549 [21/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.549 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.549 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.549 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:41.810 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:41.810 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.810 [27/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:41.810 [28/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.810 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:41.810 [30/705] Linking static target lib/librte_telemetry.a 00:02:41.810 [31/705] Linking target lib/librte_kvargs.so.24.0 00:02:41.810 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:42.071 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:42.071 [34/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:42.071 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:42.071 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:42.071 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:42.071 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:42.071 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:42.071 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:42.071 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:42.332 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:42.332 [43/705] Linking target lib/librte_telemetry.so.24.0 00:02:42.332 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.332 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:42.332 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:42.332 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:42.592 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:42.592 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:42.592 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.592 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.592 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:42.592 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.592 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.592 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.854 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.854 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.854 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.854 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.854 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.854 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.854 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.854 [63/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.854 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.116 [65/705] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:43.116 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:43.116 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:43.116 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.116 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.117 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.117 [71/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.117 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.117 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.377 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.377 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.377 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.377 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.377 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.639 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.639 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.639 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.639 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.639 [83/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.639 [84/705] Linking static target lib/librte_ring.a 00:02:43.639 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.899 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.899 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.899 [88/705] Generating lib/ring.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:43.899 [89/705] Linking static target lib/librte_eal.a 00:02:43.899 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.899 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.899 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.159 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.159 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.159 [95/705] Linking static target lib/librte_mempool.a 00:02:44.436 [96/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.436 [97/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.436 [98/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.436 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.436 [100/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.436 [101/705] Linking static target lib/librte_rcu.a 00:02:44.436 [102/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.436 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.697 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.697 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.697 [106/705] Linking static target lib/librte_meter.a 00:02:44.697 [107/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.959 [108/705] Linking static target lib/librte_mbuf.a 00:02:44.959 [109/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.959 [110/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.959 [111/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.959 [112/705] Linking static target lib/librte_net.a 00:02:44.959 [113/705] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.959 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.959 [115/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.221 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.221 [117/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.483 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.483 [119/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.745 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.745 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.745 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.745 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.745 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:46.007 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:46.007 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:46.007 [127/705] Linking static target lib/librte_pci.a 00:02:46.007 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.007 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.008 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.008 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.008 [132/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.272 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.272 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 
00:02:46.272 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:46.272 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.272 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.272 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.272 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.272 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:46.272 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:46.272 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.272 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:46.272 [144/705] Linking static target lib/librte_cmdline.a 00:02:46.532 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:46.532 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:46.532 [147/705] Linking static target lib/librte_metrics.a 00:02:46.532 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.794 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.794 [150/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:46.794 [151/705] Linking static target lib/librte_timer.a 00:02:46.794 [152/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.054 [153/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.054 [154/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:47.054 [155/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.316 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:47.316 [157/705] Compiling C 
object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:47.316 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:47.316 [159/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:47.577 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:47.839 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:47.839 [162/705] Linking static target lib/librte_bitratestats.a 00:02:47.839 [163/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:47.839 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.099 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:48.099 [166/705] Linking static target lib/librte_bbdev.a 00:02:48.099 [167/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:48.361 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:48.361 [169/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:48.361 [170/705] Linking static target lib/acl/libavx2_tmp.a 00:02:48.361 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:48.361 [172/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:48.361 [173/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.361 [174/705] Linking static target lib/librte_hash.a 00:02:48.361 [175/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.622 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:48.622 [177/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:48.622 [178/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.884 [179/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:48.884 [180/705] Linking static target lib/librte_cfgfile.a 00:02:48.884 [181/705] Linking target lib/librte_eal.so.24.0 
00:02:48.884 [182/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:48.884 [183/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.884 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:48.884 [185/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.884 [186/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:48.884 [187/705] Linking static target lib/librte_ethdev.a 00:02:48.884 [188/705] Linking target lib/librte_meter.so.24.0 00:02:48.884 [189/705] Linking target lib/librte_ring.so.24.0 00:02:49.152 [190/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:49.152 [191/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:49.152 [192/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.152 [193/705] Linking target lib/librte_pci.so.24.0 00:02:49.152 [194/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:49.152 [195/705] Linking target lib/librte_rcu.so.24.0 00:02:49.152 [196/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.152 [197/705] Linking target lib/librte_timer.so.24.0 00:02:49.152 [198/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.152 [199/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:49.152 [200/705] Linking static target lib/librte_acl.a 00:02:49.152 [201/705] Linking target lib/librte_mempool.so.24.0 00:02:49.152 [202/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.152 [203/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:49.152 [204/705] Linking target lib/librte_cfgfile.so.24.0 00:02:49.152 [205/705] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:49.152 [206/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:49.152 [207/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:49.152 [208/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:49.152 [209/705] Linking target lib/librte_mbuf.so.24.0 00:02:49.152 [210/705] Linking static target lib/librte_bpf.a 00:02:49.439 [211/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.439 [212/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.439 [213/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:49.439 [214/705] Linking static target lib/librte_compressdev.a 00:02:49.439 [215/705] Linking target lib/librte_bbdev.so.24.0 00:02:49.439 [216/705] Linking target lib/librte_acl.so.24.0 00:02:49.439 [217/705] Linking target lib/librte_net.so.24.0 00:02:49.439 [218/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.439 [219/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:49.439 [220/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:49.439 [221/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:49.439 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:49.439 [223/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.439 [224/705] Linking target lib/librte_cmdline.so.24.0 00:02:49.439 [225/705] Linking target lib/librte_hash.so.24.0 00:02:49.700 [226/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:49.700 [227/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:49.700 [228/705] Linking target lib/librte_compressdev.so.24.0 00:02:49.700 [229/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.960 [230/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:49.960 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.960 [232/705] Linking static target lib/librte_distributor.a 00:02:49.960 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:49.960 [234/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.960 [235/705] Linking target lib/librte_distributor.so.24.0 00:02:50.221 [236/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.221 [237/705] Linking static target lib/librte_dmadev.a 00:02:50.221 [238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:50.500 [239/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.500 [240/705] Linking target lib/librte_dmadev.so.24.0 00:02:50.500 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:50.500 [242/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:50.500 [243/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:50.763 [244/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:50.763 [245/705] Linking static target lib/librte_efd.a 00:02:50.763 [246/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.763 [247/705] Linking static target lib/librte_cryptodev.a 00:02:50.763 [248/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:50.763 [249/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.763 [250/705] Linking 
target lib/librte_efd.so.24.0 00:02:51.024 [251/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:51.024 [252/705] Linking static target lib/librte_gpudev.a 00:02:51.024 [253/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:51.024 [254/705] Linking static target lib/librte_dispatcher.a 00:02:51.024 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:51.024 [256/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:51.304 [257/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:51.304 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:51.574 [259/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.574 [260/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.574 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:51.574 [262/705] Linking target lib/librte_cryptodev.so.24.0 00:02:51.574 [263/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:51.574 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:51.835 [265/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.835 [266/705] Linking target lib/librte_gpudev.so.24.0 00:02:51.835 [267/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:51.835 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:51.835 [269/705] Linking static target lib/librte_gro.a 00:02:51.835 [270/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:51.835 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:51.835 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:51.835 [273/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:52.097 [274/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:52.097 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:52.097 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:52.097 [277/705] Linking static target lib/librte_gso.a 00:02:52.097 [278/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:52.097 [279/705] Linking static target lib/librte_eventdev.a 00:02:52.097 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:52.097 [281/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.097 [282/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:52.359 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:52.359 [284/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:52.359 [285/705] Linking static target lib/librte_jobstats.a 00:02:52.359 [286/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:52.359 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:52.359 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:52.359 [289/705] Linking static target lib/librte_ip_frag.a 00:02:52.620 [290/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:52.620 [291/705] Linking static target lib/librte_latencystats.a 00:02:52.620 [292/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.620 [293/705] Linking target lib/librte_jobstats.so.24.0 00:02:52.620 [294/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.620 [295/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:52.620 [296/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:52.620 [297/705] Linking target lib/librte_ethdev.so.24.0 00:02:52.620 [298/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:52.882 [299/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.882 [300/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:52.882 [301/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.882 [302/705] Linking target lib/librte_metrics.so.24.0 00:02:52.882 [303/705] Linking target lib/librte_bpf.so.24.0 00:02:52.882 [304/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:52.882 [305/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:53.143 [306/705] Linking target lib/librte_bitratestats.so.24.0 00:02:53.143 [307/705] Linking target lib/librte_gro.so.24.0 00:02:53.143 [308/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:53.143 [309/705] Linking target lib/librte_gso.so.24.0 00:02:53.143 [310/705] Linking static target lib/librte_lpm.a 00:02:53.143 [311/705] Linking target lib/librte_ip_frag.so.24.0 00:02:53.143 [312/705] Linking target lib/librte_latencystats.so.24.0 00:02:53.143 [313/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.143 [314/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:53.143 [315/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:53.143 [316/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.143 [317/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:53.143 [318/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.143 [319/705] Linking static target lib/librte_pcapng.a 00:02:53.405 [320/705] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:53.405 [321/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.405 [322/705] Linking target lib/librte_lpm.so.24.0 00:02:53.405 [323/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.405 [324/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:53.405 [325/705] Linking target lib/librte_pcapng.so.24.0 00:02:53.405 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:53.405 [327/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:53.405 [328/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.667 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.667 [330/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.667 [331/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.667 [332/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.667 [333/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.667 [334/705] Linking target lib/librte_eventdev.so.24.0 00:02:53.929 [335/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.929 [336/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:53.929 [337/705] Linking target lib/librte_dispatcher.so.24.0 00:02:53.929 [338/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:53.929 [339/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.929 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.929 [341/705] Linking static target lib/librte_regexdev.a 00:02:53.929 [342/705] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.929 [343/705] Linking static target lib/librte_power.a 00:02:53.929 [344/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.929 [345/705] Linking static target lib/librte_rawdev.a 00:02:53.929 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:53.929 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:54.189 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:54.189 [349/705] Linking static target lib/librte_mldev.a 00:02:54.189 [350/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:54.189 [351/705] Linking static target lib/librte_member.a 00:02:54.189 [352/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.189 [353/705] Linking static target lib/librte_reorder.a 00:02:54.450 [354/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.450 [355/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.450 [356/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.450 [357/705] Linking target lib/librte_rawdev.so.24.0 00:02:54.450 [358/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.450 [359/705] Linking static target lib/librte_rib.a 00:02:54.450 [360/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.450 [361/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.450 [362/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.450 [363/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.450 [364/705] Linking target lib/librte_member.so.24.0 00:02:54.450 [365/705] Linking target lib/librte_power.so.24.0 00:02:54.450 [366/705] Linking target 
lib/librte_regexdev.so.24.0 00:02:54.711 [367/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.712 [368/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.712 [369/705] Linking target lib/librte_reorder.so.24.0 00:02:54.712 [370/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.712 [371/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:54.712 [372/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.712 [373/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.712 [374/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.712 [375/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.712 [376/705] Linking static target lib/librte_stack.a 00:02:54.712 [377/705] Linking target lib/librte_rib.so.24.0 00:02:54.972 [378/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:54.972 [379/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.973 [380/705] Linking static target lib/librte_security.a 00:02:54.973 [381/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.973 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.973 [383/705] Linking target lib/librte_stack.so.24.0 00:02:55.233 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.233 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.233 [386/705] Linking target lib/librte_mldev.so.24.0 00:02:55.233 [387/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:55.233 [388/705] Linking static target lib/librte_sched.a 00:02:55.233 [389/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.233 
[390/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.494 [391/705] Linking target lib/librte_security.so.24.0 00:02:55.494 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:55.494 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.755 [394/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.755 [395/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.755 [396/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.755 [397/705] Linking target lib/librte_sched.so.24.0 00:02:55.755 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:55.755 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:56.016 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:56.016 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:56.016 [402/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.305 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:56.305 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:56.305 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:56.593 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:56.593 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:56.593 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:56.593 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:56.853 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.853 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:56.853 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:56.853 [413/705] Linking static target lib/librte_ipsec.a 00:02:56.853 [414/705] Compiling 
C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:56.853 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:57.113 [416/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.113 [417/705] Linking target lib/librte_ipsec.so.24.0 00:02:57.113 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:57.113 [419/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:57.375 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:57.375 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:57.375 [422/705] Linking static target lib/librte_fib.a 00:02:57.375 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:57.375 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:57.375 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:57.375 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:57.637 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.637 [428/705] Linking target lib/librte_fib.so.24.0 00:02:57.637 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:57.637 [430/705] Linking static target lib/librte_pdcp.a 00:02:57.896 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.896 [432/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:57.896 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:57.896 [434/705] Linking target lib/librte_pdcp.so.24.0 00:02:57.896 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:58.167 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:58.167 [437/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:58.167 
[438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:58.425 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:58.425 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:58.425 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:58.425 [442/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:58.425 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:58.683 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:58.683 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.683 [446/705] Linking static target lib/librte_port.a 00:02:58.683 [447/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:58.683 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:58.683 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:58.683 [450/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:58.683 [451/705] Linking static target lib/librte_pdump.a 00:02:58.942 [452/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.942 [453/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.942 [454/705] Linking target lib/librte_port.so.24.0 00:02:58.942 [455/705] Linking target lib/librte_pdump.so.24.0 00:02:58.942 [456/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.200 [457/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:59.200 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:59.200 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:59.201 [460/705] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:59.201 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:59.459 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:59.459 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:59.459 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:59.459 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:59.459 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:59.459 [467/705] Linking static target lib/librte_table.a 00:03:00.026 [468/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:00.026 [469/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:00.026 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.026 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:00.026 [472/705] Linking target lib/librte_table.so.24.0 00:03:00.026 [473/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:00.026 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:00.026 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:00.026 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:00.284 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:00.284 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:00.284 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:00.284 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:00.542 [481/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:00.542 [482/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:00.542 
[483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:00.542 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:00.801 [485/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:00.801 [486/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:00.801 [487/705] Linking static target lib/librte_graph.a 00:03:00.801 [488/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:01.059 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:01.315 [490/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:01.315 [491/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:01.315 [492/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:01.315 [493/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:01.315 [494/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.315 [495/705] Linking target lib/librte_graph.so.24.0 00:03:01.315 [496/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:01.574 [497/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:01.574 [498/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:01.574 [499/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.574 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.574 [501/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:01.574 [502/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:01.832 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:01.832 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.832 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.832 [506/705] Compiling C object 
lib/librte_node.a.p/node_udp4_input.c.o 00:03:01.832 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.832 [508/705] Linking static target lib/librte_node.a 00:03:02.090 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.090 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.090 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.090 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.090 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.090 [514/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.090 [515/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.090 [516/705] Linking target lib/librte_node.so.24.0 00:03:02.090 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.349 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.349 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:02.349 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.349 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.349 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.349 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:02.349 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:02.349 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.349 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:02.349 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.608 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.608 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:02.608 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.608 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.608 [532/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.608 [533/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:02.608 [534/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:02.608 [535/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.608 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:02.608 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.608 [538/705] Linking static target drivers/librte_mempool_ring.a 00:03:02.608 [539/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.867 [540/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:02.867 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:03.124 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:03.124 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.688 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.688 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:03.688 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:03.947 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:03.947 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 
00:03:03.947 [549/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:03.947 [550/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:03.947 [551/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:03.947 [552/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:04.514 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:04.514 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.514 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:04.514 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:04.514 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.772 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:04.772 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:05.030 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:05.030 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:05.030 [562/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:05.288 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:05.288 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:05.288 [565/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:05.288 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:05.288 [567/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:05.546 [568/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:05.546 [569/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:05.546 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:05.546 [571/705] Compiling C object 
app/dpdk-graph.p/graph_utils.c.o 00:03:05.546 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:05.546 [573/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:05.803 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:05.803 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:05.804 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:05.804 [577/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:05.804 [578/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:06.062 [579/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:06.062 [580/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:06.062 [581/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.062 [582/705] Linking static target drivers/librte_net_i40e.a 00:03:06.320 [583/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:06.320 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:06.320 [585/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:06.320 [586/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:06.320 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:06.320 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:06.579 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.579 [590/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:06.579 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:06.579 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 
00:03:06.837 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:06.837 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:06.837 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:06.837 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:07.095 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:07.095 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:07.352 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:07.352 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:07.352 [601/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.352 [602/705] Linking static target lib/librte_vhost.a 00:03:07.352 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:07.352 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:07.611 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:07.611 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.611 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:07.611 [608/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:07.611 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.611 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:07.611 [611/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.870 [612/705] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:07.870 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:08.128 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:08.128 [615/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.128 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:08.128 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:08.128 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:08.693 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.693 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:08.693 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.693 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.693 [623/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.952 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:08.952 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.952 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:08.952 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:08.952 [628/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:08.952 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:08.952 [630/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:09.210 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:09.210 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:09.467 [633/705] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:09.467 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:09.467 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:09.467 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:09.467 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:09.467 [638/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:09.467 [639/705] Linking static target lib/librte_pipeline.a 00:03:09.467 [640/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:09.724 [641/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:09.725 [642/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:09.725 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:09.725 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:09.725 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:09.725 [646/705] Linking target app/dpdk-dumpcap 00:03:09.725 [647/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:09.982 [648/705] Linking target app/dpdk-graph 00:03:09.982 [649/705] Linking target app/dpdk-pdump 00:03:09.982 [650/705] Linking target app/dpdk-test-acl 00:03:09.982 [651/705] Linking target app/dpdk-proc-info 00:03:09.982 [652/705] Linking target app/dpdk-test-cmdline 00:03:10.240 [653/705] Linking target app/dpdk-test-crypto-perf 00:03:10.240 [654/705] Linking target app/dpdk-test-compress-perf 00:03:10.240 [655/705] Linking target app/dpdk-test-dma-perf 00:03:10.240 [656/705] Linking target app/dpdk-test-fib 00:03:10.240 [657/705] Linking target app/dpdk-test-eventdev 00:03:10.240 [658/705] Linking target app/dpdk-test-flow-perf 00:03:10.240 
[659/705] Linking target app/dpdk-test-gpudev 00:03:10.498 [660/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:10.498 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:10.498 [662/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:10.498 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:10.498 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:10.756 [665/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:10.756 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:10.756 [667/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:10.756 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:11.014 [669/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:11.014 [670/705] Linking target app/dpdk-test-bbdev 00:03:11.014 [671/705] Linking target app/dpdk-test-mldev 00:03:11.014 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:11.014 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:11.271 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:11.271 [675/705] Linking target app/dpdk-test-pipeline 00:03:11.527 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:11.527 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:11.527 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:11.527 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:11.527 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:11.527 [681/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.785 [682/705] Linking target 
lib/librte_pipeline.so.24.0 00:03:11.785 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:11.785 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:11.785 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:12.042 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:12.042 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:12.042 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:12.042 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:12.300 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:12.300 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:12.557 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:12.557 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:12.815 [694/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:12.815 [695/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.815 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.815 [697/705] Linking target app/dpdk-test-sad 00:03:13.072 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:13.072 [699/705] Linking target app/dpdk-test-regex 00:03:13.072 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:13.072 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:13.330 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:13.330 [703/705] Linking target app/dpdk-test-security-perf 00:03:13.588 [704/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:13.846 [705/705] Linking target app/dpdk-testpmd 00:03:13.846 08:47:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ 
uname -s 00:03:13.846 08:47:07 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:13.846 08:47:07 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:13.846 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.846 [0/1] Installing files. 00:03:14.107 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.108 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.109 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.110 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.110 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.111 Installing 
/home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 
Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.111 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.111 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.111 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.370 
Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.370 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.371 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.631 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.631 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.631 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.631 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.631 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.631 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.632 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.632 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.632 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:14.632 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:14.632 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.632 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.633 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:14.634 Installing
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.634 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.635 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.635 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:14.635 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:14.635 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:14.635 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:14.635 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:14.635 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:14.635 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:14.635 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:14.635 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:14.635 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:14.635 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:14.635 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:14.635 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:14.635 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:14.635 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:14.635 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:14.635 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:14.635 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:14.635 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:14.635 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:14.635 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:14.635 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:14.635 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:14.635 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:14.635 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:14.635 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:14.635 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:14.635 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:14.635 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:14.635 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:14.635 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:14.635 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:14.635 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:14.635 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:14.635 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:14.635 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:14.635 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:14.635 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:14.635 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:14.635 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:14.635 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:14.635 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:14.635 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:14.635 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:14.635 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:14.635 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:14.635 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:14.635 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:14.635 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:14.635 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:14.635 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:14.635 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:14.635 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:14.635 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:14.635 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:14.635 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:14.635 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:14.635 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:14.635 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:14.636 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:14.636 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:14.636 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:14.636 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:14.636 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:14.636 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:14.636 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:14.636 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:14.636 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:14.636 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:14.636 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:14.636 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:14.636 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:14.636 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:14.636 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:14.636 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:14.636 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:14.636 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:14.636 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:14.636 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:14.636 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:14.636 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:14.636 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:14.636 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:14.636 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:14.636 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:14.636 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:14.636 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:14.636 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:14.636 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:14.636 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:14.636 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:14.636 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:14.636 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:14.636 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:14.636 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:14.636 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:14.636 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:14.636 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:14.636 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:14.636 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:14.636 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:14.636 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:14.636 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:14.636 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:14.636 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:14.636 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:14.636 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:14.636 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:14.636 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:14.636 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:14.636 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:14.636 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:14.636 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:14.636 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:14.636 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:14.636 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:14.636 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:14.636 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:14.636 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:14.636 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:14.636 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:14.636 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:14.636 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:14.636 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:14.636 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:14.636 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:14.636 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:14.636 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:14.636 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:14.636 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:14.636 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:14.636 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:14.636 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:14.636 08:47:08 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:14.636 08:47:08 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.636 00:03:14.636 real 0m40.574s 00:03:14.636 user 4m36.847s 00:03:14.636 sys 0m42.806s 00:03:14.636 08:47:08 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:14.636 08:47:08 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:14.636 ************************************ 00:03:14.636 END TEST build_native_dpdk 00:03:14.636 ************************************ 00:03:14.636 08:47:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.636 08:47:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.636 08:47:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:14.895 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:14.895 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.895 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:14.895 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:15.462 Using 'verbs' RDMA provider 00:03:28.607 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:38.581 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:38.581 Creating mk/config.mk...done. 00:03:38.581 Creating mk/cc.flags.mk...done. 00:03:38.581 Type 'make' to build. 00:03:38.582 08:47:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:38.582 08:47:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:38.582 08:47:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:38.582 08:47:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:38.582 ************************************ 00:03:38.582 START TEST make 00:03:38.582 ************************************ 00:03:38.582 08:47:32 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:38.839 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:38.839 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:38.839 meson setup builddir \ 00:03:38.839 -Dwith-libaio=enabled \ 00:03:38.839 -Dwith-liburing=enabled \ 00:03:38.839 -Dwith-libvfn=disabled \ 00:03:38.839 -Dwith-spdk=false && \ 00:03:38.839 meson compile -C builddir && \ 00:03:38.839 cd -) 00:03:38.839 make[1]: Nothing to be done for 'all'. 
00:03:41.371 The Meson build system 00:03:41.371 Version: 1.5.0 00:03:41.371 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:41.371 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:41.371 Build type: native build 00:03:41.371 Project name: xnvme 00:03:41.371 Project version: 0.7.3 00:03:41.371 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:41.371 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:41.371 Host machine cpu family: x86_64 00:03:41.371 Host machine cpu: x86_64 00:03:41.371 Message: host_machine.system: linux 00:03:41.371 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:41.371 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:41.371 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:41.371 Run-time dependency threads found: YES 00:03:41.371 Has header "setupapi.h" : NO 00:03:41.371 Has header "linux/blkzoned.h" : YES 00:03:41.371 Has header "linux/blkzoned.h" : YES (cached) 00:03:41.371 Has header "libaio.h" : YES 00:03:41.371 Library aio found: YES 00:03:41.371 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:41.371 Run-time dependency liburing found: YES 2.2 00:03:41.371 Dependency libvfn skipped: feature with-libvfn disabled 00:03:41.372 Run-time dependency appleframeworks found: NO (tried framework) 00:03:41.372 Run-time dependency appleframeworks found: NO (tried framework) 00:03:41.372 Configuring xnvme_config.h using configuration 00:03:41.372 Configuring xnvme.spec using configuration 00:03:41.372 Run-time dependency bash-completion found: YES 2.11 00:03:41.372 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:41.372 Program cp found: YES (/usr/bin/cp) 00:03:41.372 Has header "winsock2.h" : NO 00:03:41.372 Has header "dbghelp.h" : NO 00:03:41.372 Library rpcrt4 found: NO 00:03:41.372 Library rt found: YES 00:03:41.372 Checking for function "clock_gettime" with dependency -lrt: YES 
00:03:41.372 Found CMake: /usr/bin/cmake (3.27.7) 00:03:41.372 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:41.372 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:41.372 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:41.372 Build targets in project: 32 00:03:41.372 00:03:41.372 xnvme 0.7.3 00:03:41.372 00:03:41.372 User defined options 00:03:41.372 with-libaio : enabled 00:03:41.372 with-liburing: enabled 00:03:41.372 with-libvfn : disabled 00:03:41.372 with-spdk : false 00:03:41.372 00:03:41.372 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.940 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:41.940 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:41.940 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:41.940 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:41.940 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:41.940 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:41.940 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:41.940 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:42.201 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:42.201 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:42.201 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:42.201 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:42.201 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:42.201 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:42.201 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:42.201 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:42.201 [16/203] Compiling C 
object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:42.201 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:42.201 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:42.201 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:42.201 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:42.201 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:42.201 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:42.201 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:42.201 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:42.201 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:42.201 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:42.201 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:42.462 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:42.462 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:42.462 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:42.462 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:42.462 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:42.462 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:42.462 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:42.462 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:42.462 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:42.462 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:42.462 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:42.462 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:42.462 [40/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:42.462 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:42.462 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:42.462 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:42.462 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:42.462 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:42.462 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:42.462 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:42.462 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:42.462 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:42.463 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:42.463 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:42.463 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:42.463 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:42.463 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:42.463 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:42.463 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:42.463 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:42.463 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:42.463 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:42.463 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:42.721 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:42.721 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:42.721 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:42.721 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:42.721 [65/203] Compiling C 
object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:42.721 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:42.721 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:42.721 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:42.721 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:42.721 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:42.721 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:42.721 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:42.722 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:42.722 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:42.722 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:42.722 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:42.722 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:42.722 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:42.979 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:42.979 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:42.979 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:42.979 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:42.979 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:42.979 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:42.979 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:42.979 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:42.979 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:42.979 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:42.979 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:42.979 [90/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:42.979 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:42.979 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:42.979 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:42.979 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:42.979 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:43.237 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:43.237 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:43.237 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:43.237 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:43.237 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:43.237 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:43.237 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:43.237 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:43.237 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:43.237 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:43.237 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:43.237 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:43.237 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:43.237 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:43.237 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:43.237 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:43.237 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:43.237 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:43.237 [114/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:43.237 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:43.237 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:43.237 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:43.237 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:43.237 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:43.237 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:43.237 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:43.237 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:43.237 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:43.237 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:43.237 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:43.238 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:43.238 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:43.238 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:43.496 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:43.496 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:43.496 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:43.496 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:43.496 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:43.496 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:43.496 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:43.496 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:43.496 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:43.496 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:43.496 [139/203] Compiling C object 
tests/xnvme_tests_cli.p/cli.c.o 00:03:43.496 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:43.496 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:43.496 [142/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:43.496 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:43.496 [144/203] Linking target lib/libxnvme.so 00:03:43.496 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:43.496 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:43.754 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:43.754 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:43.754 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:43.754 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:43.754 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:43.754 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:43.754 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:43.754 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:43.754 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:43.754 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:43.754 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:43.754 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:43.754 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:43.754 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:43.754 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:43.754 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:44.013 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:44.013 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 
00:03:44.013 [165/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:44.013 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:44.013 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:44.013 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:44.013 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:44.013 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:44.013 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:44.013 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:44.013 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:44.013 [174/203] Linking static target lib/libxnvme.a 00:03:44.013 [175/203] Linking target tests/xnvme_tests_lblk 00:03:44.013 [176/203] Linking target tests/xnvme_tests_buf 00:03:44.013 [177/203] Linking target tests/xnvme_tests_enum 00:03:44.013 [178/203] Linking target tests/xnvme_tests_scc 00:03:44.271 [179/203] Linking target tests/xnvme_tests_async_intf 00:03:44.271 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:03:44.271 [181/203] Linking target tests/xnvme_tests_cli 00:03:44.271 [182/203] Linking target tests/xnvme_tests_znd_append 00:03:44.271 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:44.271 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:44.271 [185/203] Linking target tests/xnvme_tests_ioworker 00:03:44.271 [186/203] Linking target tests/xnvme_tests_znd_state 00:03:44.271 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:44.271 [188/203] Linking target tools/xdd 00:03:44.271 [189/203] Linking target tools/lblk 00:03:44.271 [190/203] Linking target tests/xnvme_tests_kvs 00:03:44.271 [191/203] Linking target tests/xnvme_tests_map 00:03:44.271 [192/203] Linking target tools/xnvme 00:03:44.271 [193/203] Linking target examples/xnvme_enum 00:03:44.271 [194/203] Linking target examples/xnvme_dev 00:03:44.271 
[195/203] Linking target tools/zoned 00:03:44.271 [196/203] Linking target tools/kvs 00:03:44.271 [197/203] Linking target tools/xnvme_file 00:03:44.271 [198/203] Linking target examples/xnvme_hello 00:03:44.271 [199/203] Linking target examples/xnvme_single_async 00:03:44.271 [200/203] Linking target examples/xnvme_single_sync 00:03:44.271 [201/203] Linking target examples/xnvme_io_async 00:03:44.271 [202/203] Linking target examples/zoned_io_async 00:03:44.271 [203/203] Linking target examples/zoned_io_sync 00:03:44.271 INFO: autodetecting backend as ninja 00:03:44.271 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:44.271 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:22.981 CC lib/log/log.o 00:04:22.981 CC lib/log/log_deprecated.o 00:04:22.981 CC lib/log/log_flags.o 00:04:22.981 CC lib/ut/ut.o 00:04:22.981 CC lib/ut_mock/mock.o 00:04:22.981 LIB libspdk_log.a 00:04:22.981 LIB libspdk_ut_mock.a 00:04:22.981 SO libspdk_log.so.7.0 00:04:22.981 SO libspdk_ut_mock.so.6.0 00:04:22.981 LIB libspdk_ut.a 00:04:22.981 SO libspdk_ut.so.2.0 00:04:22.981 SYMLINK libspdk_ut_mock.so 00:04:22.981 SYMLINK libspdk_log.so 00:04:22.981 SYMLINK libspdk_ut.so 00:04:22.981 CC lib/util/base64.o 00:04:22.981 CC lib/util/bit_array.o 00:04:22.981 CC lib/util/crc16.o 00:04:22.981 CC lib/util/cpuset.o 00:04:22.981 CC lib/util/crc32.o 00:04:22.981 CC lib/util/crc32c.o 00:04:22.981 CC lib/ioat/ioat.o 00:04:22.981 CC lib/dma/dma.o 00:04:22.981 CXX lib/trace_parser/trace.o 00:04:22.981 CC lib/vfio_user/host/vfio_user_pci.o 00:04:22.981 CC lib/util/crc32_ieee.o 00:04:22.981 CC lib/util/crc64.o 00:04:22.981 CC lib/vfio_user/host/vfio_user.o 00:04:22.981 CC lib/util/dif.o 00:04:22.981 LIB libspdk_dma.a 00:04:22.981 CC lib/util/fd.o 00:04:22.981 SO libspdk_dma.so.5.0 00:04:22.981 CC lib/util/fd_group.o 00:04:22.981 CC lib/util/file.o 00:04:22.981 CC lib/util/hexlify.o 00:04:22.981 SYMLINK libspdk_dma.so 00:04:22.981 CC 
lib/util/iov.o 00:04:22.981 CC lib/util/math.o 00:04:22.981 LIB libspdk_ioat.a 00:04:22.981 SO libspdk_ioat.so.7.0 00:04:22.981 LIB libspdk_vfio_user.a 00:04:22.981 CC lib/util/net.o 00:04:22.981 SO libspdk_vfio_user.so.5.0 00:04:22.981 SYMLINK libspdk_ioat.so 00:04:22.981 CC lib/util/pipe.o 00:04:22.981 CC lib/util/strerror_tls.o 00:04:22.981 CC lib/util/string.o 00:04:22.981 CC lib/util/uuid.o 00:04:22.981 SYMLINK libspdk_vfio_user.so 00:04:22.981 CC lib/util/xor.o 00:04:22.981 CC lib/util/zipf.o 00:04:22.981 CC lib/util/md5.o 00:04:22.981 LIB libspdk_trace_parser.a 00:04:22.981 LIB libspdk_util.a 00:04:22.981 SO libspdk_trace_parser.so.6.0 00:04:22.981 SO libspdk_util.so.10.0 00:04:22.981 SYMLINK libspdk_trace_parser.so 00:04:22.981 SYMLINK libspdk_util.so 00:04:22.981 CC lib/json/json_parse.o 00:04:22.981 CC lib/json/json_util.o 00:04:22.981 CC lib/json/json_write.o 00:04:22.981 CC lib/rdma_utils/rdma_utils.o 00:04:22.981 CC lib/env_dpdk/env.o 00:04:22.981 CC lib/env_dpdk/memory.o 00:04:22.981 CC lib/vmd/vmd.o 00:04:22.981 CC lib/idxd/idxd.o 00:04:22.981 CC lib/rdma_provider/common.o 00:04:22.981 CC lib/conf/conf.o 00:04:22.981 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:22.981 CC lib/idxd/idxd_user.o 00:04:22.981 LIB libspdk_conf.a 00:04:22.981 CC lib/vmd/led.o 00:04:22.981 SO libspdk_conf.so.6.0 00:04:22.981 LIB libspdk_rdma_utils.a 00:04:22.981 SO libspdk_rdma_utils.so.1.0 00:04:22.981 LIB libspdk_rdma_provider.a 00:04:22.981 LIB libspdk_json.a 00:04:22.981 SYMLINK libspdk_conf.so 00:04:22.981 CC lib/env_dpdk/pci.o 00:04:22.981 SO libspdk_rdma_provider.so.6.0 00:04:22.981 SO libspdk_json.so.6.0 00:04:22.981 SYMLINK libspdk_rdma_utils.so 00:04:22.981 CC lib/idxd/idxd_kernel.o 00:04:22.981 SYMLINK libspdk_rdma_provider.so 00:04:22.981 CC lib/env_dpdk/init.o 00:04:22.981 CC lib/env_dpdk/threads.o 00:04:22.981 SYMLINK libspdk_json.so 00:04:22.981 CC lib/env_dpdk/pci_ioat.o 00:04:22.981 CC lib/env_dpdk/pci_virtio.o 00:04:22.981 CC lib/env_dpdk/pci_vmd.o 
00:04:22.981 CC lib/jsonrpc/jsonrpc_server.o 00:04:22.981 CC lib/env_dpdk/pci_idxd.o 00:04:22.981 CC lib/env_dpdk/pci_event.o 00:04:22.981 CC lib/env_dpdk/sigbus_handler.o 00:04:22.981 CC lib/env_dpdk/pci_dpdk.o 00:04:22.981 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:22.981 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:22.981 LIB libspdk_idxd.a 00:04:22.981 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:22.981 SO libspdk_idxd.so.12.1 00:04:22.981 CC lib/jsonrpc/jsonrpc_client.o 00:04:22.981 LIB libspdk_vmd.a 00:04:22.981 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:22.981 SO libspdk_vmd.so.6.0 00:04:22.981 SYMLINK libspdk_idxd.so 00:04:22.981 SYMLINK libspdk_vmd.so 00:04:22.981 LIB libspdk_jsonrpc.a 00:04:22.981 SO libspdk_jsonrpc.so.6.0 00:04:22.981 SYMLINK libspdk_jsonrpc.so 00:04:22.981 CC lib/rpc/rpc.o 00:04:22.981 LIB libspdk_env_dpdk.a 00:04:22.981 LIB libspdk_rpc.a 00:04:22.981 SO libspdk_rpc.so.6.0 00:04:22.981 SO libspdk_env_dpdk.so.15.0 00:04:22.981 SYMLINK libspdk_rpc.so 00:04:22.981 SYMLINK libspdk_env_dpdk.so 00:04:22.981 CC lib/notify/notify_rpc.o 00:04:22.981 CC lib/keyring/keyring_rpc.o 00:04:22.981 CC lib/notify/notify.o 00:04:22.981 CC lib/keyring/keyring.o 00:04:22.981 CC lib/trace/trace.o 00:04:22.981 CC lib/trace/trace_flags.o 00:04:22.981 CC lib/trace/trace_rpc.o 00:04:22.981 LIB libspdk_notify.a 00:04:22.981 SO libspdk_notify.so.6.0 00:04:22.981 SYMLINK libspdk_notify.so 00:04:22.981 LIB libspdk_keyring.a 00:04:22.981 LIB libspdk_trace.a 00:04:22.981 SO libspdk_keyring.so.2.0 00:04:22.981 SO libspdk_trace.so.11.0 00:04:22.981 SYMLINK libspdk_trace.so 00:04:22.981 SYMLINK libspdk_keyring.so 00:04:22.981 CC lib/sock/sock_rpc.o 00:04:22.981 CC lib/thread/thread.o 00:04:22.981 CC lib/sock/sock.o 00:04:22.981 CC lib/thread/iobuf.o 00:04:22.981 LIB libspdk_sock.a 00:04:22.981 SO libspdk_sock.so.10.0 00:04:22.981 SYMLINK libspdk_sock.so 00:04:22.981 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.981 CC lib/nvme/nvme_fabric.o 00:04:22.981 CC lib/nvme/nvme_ctrlr.o 00:04:22.981 CC 
lib/nvme/nvme_ns.o 00:04:22.981 CC lib/nvme/nvme_qpair.o 00:04:22.981 CC lib/nvme/nvme_ns_cmd.o 00:04:22.981 CC lib/nvme/nvme_pcie.o 00:04:22.981 CC lib/nvme/nvme.o 00:04:22.981 CC lib/nvme/nvme_pcie_common.o 00:04:22.981 CC lib/nvme/nvme_quirks.o 00:04:22.981 CC lib/nvme/nvme_transport.o 00:04:22.981 CC lib/nvme/nvme_discovery.o 00:04:22.981 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.981 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.981 LIB libspdk_thread.a 00:04:22.981 SO libspdk_thread.so.10.1 00:04:22.981 CC lib/nvme/nvme_tcp.o 00:04:22.981 SYMLINK libspdk_thread.so 00:04:22.981 CC lib/nvme/nvme_opal.o 00:04:22.981 CC lib/accel/accel.o 00:04:22.981 CC lib/nvme/nvme_io_msg.o 00:04:22.981 CC lib/nvme/nvme_poll_group.o 00:04:22.981 CC lib/nvme/nvme_zns.o 00:04:22.981 CC lib/nvme/nvme_stubs.o 00:04:22.981 CC lib/nvme/nvme_auth.o 00:04:22.981 CC lib/nvme/nvme_cuse.o 00:04:22.981 CC lib/nvme/nvme_rdma.o 00:04:22.981 CC lib/blob/blobstore.o 00:04:22.981 CC lib/blob/request.o 00:04:23.239 CC lib/blob/zeroes.o 00:04:23.239 CC lib/blob/blob_bs_dev.o 00:04:23.239 CC lib/accel/accel_rpc.o 00:04:23.239 CC lib/accel/accel_sw.o 00:04:23.497 CC lib/fsdev/fsdev.o 00:04:23.497 CC lib/fsdev/fsdev_io.o 00:04:23.497 CC lib/virtio/virtio.o 00:04:23.497 CC lib/init/json_config.o 00:04:23.497 LIB libspdk_accel.a 00:04:23.755 SO libspdk_accel.so.16.0 00:04:23.755 CC lib/fsdev/fsdev_rpc.o 00:04:23.755 CC lib/init/subsystem.o 00:04:23.755 SYMLINK libspdk_accel.so 00:04:23.755 CC lib/init/subsystem_rpc.o 00:04:23.755 CC lib/virtio/virtio_vhost_user.o 00:04:23.755 CC lib/virtio/virtio_vfio_user.o 00:04:23.755 CC lib/virtio/virtio_pci.o 00:04:23.755 CC lib/init/rpc.o 00:04:23.755 CC lib/bdev/bdev_rpc.o 00:04:23.755 CC lib/bdev/bdev.o 00:04:23.755 CC lib/bdev/bdev_zone.o 00:04:24.013 LIB libspdk_init.a 00:04:24.013 CC lib/bdev/part.o 00:04:24.013 SO libspdk_init.so.6.0 00:04:24.013 CC lib/bdev/scsi_nvme.o 00:04:24.013 SYMLINK libspdk_init.so 00:04:24.013 LIB libspdk_virtio.a 00:04:24.013 LIB 
libspdk_fsdev.a 00:04:24.271 SO libspdk_virtio.so.7.0 00:04:24.271 SO libspdk_fsdev.so.1.0 00:04:24.271 SYMLINK libspdk_virtio.so 00:04:24.271 CC lib/event/app.o 00:04:24.271 CC lib/event/app_rpc.o 00:04:24.271 CC lib/event/log_rpc.o 00:04:24.271 CC lib/event/reactor.o 00:04:24.271 CC lib/event/scheduler_static.o 00:04:24.271 SYMLINK libspdk_fsdev.so 00:04:24.271 LIB libspdk_nvme.a 00:04:24.271 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.271 SO libspdk_nvme.so.14.0 00:04:24.529 SYMLINK libspdk_nvme.so 00:04:24.529 LIB libspdk_event.a 00:04:24.787 SO libspdk_event.so.14.0 00:04:24.787 SYMLINK libspdk_event.so 00:04:25.046 LIB libspdk_fuse_dispatcher.a 00:04:25.046 SO libspdk_fuse_dispatcher.so.1.0 00:04:25.046 SYMLINK libspdk_fuse_dispatcher.so 00:04:25.612 LIB libspdk_blob.a 00:04:25.870 SO libspdk_blob.so.11.0 00:04:25.871 SYMLINK libspdk_blob.so 00:04:26.128 CC lib/blobfs/blobfs.o 00:04:26.128 CC lib/blobfs/tree.o 00:04:26.128 CC lib/lvol/lvol.o 00:04:26.386 LIB libspdk_bdev.a 00:04:26.386 SO libspdk_bdev.so.16.0 00:04:26.669 SYMLINK libspdk_bdev.so 00:04:26.669 CC lib/nvmf/ctrlr.o 00:04:26.669 CC lib/nvmf/ctrlr_discovery.o 00:04:26.669 CC lib/nvmf/ctrlr_bdev.o 00:04:26.669 CC lib/nvmf/subsystem.o 00:04:26.669 CC lib/ublk/ublk.o 00:04:26.669 CC lib/ftl/ftl_core.o 00:04:26.669 CC lib/nbd/nbd.o 00:04:26.669 CC lib/scsi/dev.o 00:04:26.669 LIB libspdk_blobfs.a 00:04:26.669 SO libspdk_blobfs.so.10.0 00:04:27.045 SYMLINK libspdk_blobfs.so 00:04:27.045 CC lib/nbd/nbd_rpc.o 00:04:27.045 CC lib/scsi/lun.o 00:04:27.045 CC lib/scsi/port.o 00:04:27.045 CC lib/ftl/ftl_init.o 00:04:27.045 LIB libspdk_nbd.a 00:04:27.045 LIB libspdk_lvol.a 00:04:27.045 SO libspdk_nbd.so.7.0 00:04:27.045 CC lib/scsi/scsi.o 00:04:27.045 SO libspdk_lvol.so.10.0 00:04:27.045 SYMLINK libspdk_nbd.so 00:04:27.045 CC lib/ftl/ftl_layout.o 00:04:27.045 SYMLINK libspdk_lvol.so 00:04:27.045 CC lib/ftl/ftl_debug.o 00:04:27.045 CC lib/ftl/ftl_io.o 00:04:27.303 CC lib/ftl/ftl_sb.o 00:04:27.303 CC 
lib/ftl/ftl_l2p.o 00:04:27.303 CC lib/scsi/scsi_bdev.o 00:04:27.303 CC lib/ftl/ftl_l2p_flat.o 00:04:27.303 CC lib/ftl/ftl_nv_cache.o 00:04:27.303 CC lib/scsi/scsi_pr.o 00:04:27.303 CC lib/scsi/scsi_rpc.o 00:04:27.303 CC lib/ublk/ublk_rpc.o 00:04:27.303 CC lib/nvmf/nvmf.o 00:04:27.303 CC lib/nvmf/nvmf_rpc.o 00:04:27.560 CC lib/nvmf/transport.o 00:04:27.560 CC lib/ftl/ftl_band.o 00:04:27.560 LIB libspdk_ublk.a 00:04:27.560 SO libspdk_ublk.so.3.0 00:04:27.560 SYMLINK libspdk_ublk.so 00:04:27.560 CC lib/ftl/ftl_band_ops.o 00:04:27.560 CC lib/ftl/ftl_writer.o 00:04:27.818 CC lib/scsi/task.o 00:04:27.818 CC lib/ftl/ftl_rq.o 00:04:27.818 LIB libspdk_scsi.a 00:04:27.818 CC lib/nvmf/tcp.o 00:04:27.818 CC lib/ftl/ftl_reloc.o 00:04:27.818 CC lib/ftl/ftl_l2p_cache.o 00:04:27.818 SO libspdk_scsi.so.9.0 00:04:28.077 CC lib/nvmf/stubs.o 00:04:28.077 SYMLINK libspdk_scsi.so 00:04:28.077 CC lib/ftl/ftl_p2l.o 00:04:28.077 CC lib/nvmf/mdns_server.o 00:04:28.077 CC lib/ftl/ftl_p2l_log.o 00:04:28.077 CC lib/nvmf/rdma.o 00:04:28.335 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.335 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.335 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.335 CC lib/nvmf/auth.o 00:04:28.335 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.335 CC lib/iscsi/conn.o 00:04:28.335 CC lib/iscsi/init_grp.o 00:04:28.335 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.593 CC lib/iscsi/iscsi.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.593 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.851 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.851 CC lib/vhost/vhost.o 00:04:28.851 CC lib/vhost/vhost_rpc.o 00:04:28.851 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.851 CC lib/ftl/utils/ftl_conf.o 00:04:29.110 CC lib/ftl/utils/ftl_md.o 00:04:29.110 CC lib/iscsi/param.o 00:04:29.110 CC lib/iscsi/portal_grp.o 00:04:29.110 CC 
lib/ftl/utils/ftl_mempool.o 00:04:29.110 CC lib/ftl/utils/ftl_bitmap.o 00:04:29.110 CC lib/ftl/utils/ftl_property.o 00:04:29.368 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:29.368 CC lib/vhost/vhost_scsi.o 00:04:29.368 CC lib/vhost/vhost_blk.o 00:04:29.368 CC lib/vhost/rte_vhost_user.o 00:04:29.368 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:29.368 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:29.368 CC lib/iscsi/tgt_node.o 00:04:29.368 CC lib/iscsi/iscsi_subsystem.o 00:04:29.368 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:29.368 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:29.626 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:29.627 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:29.627 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:29.627 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:29.627 CC lib/iscsi/iscsi_rpc.o 00:04:29.627 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:29.884 CC lib/iscsi/task.o 00:04:29.884 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:29.884 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:29.884 LIB libspdk_nvmf.a 00:04:29.884 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:29.884 CC lib/ftl/base/ftl_base_dev.o 00:04:29.884 CC lib/ftl/base/ftl_base_bdev.o 00:04:29.884 SO libspdk_nvmf.so.19.0 00:04:30.142 CC lib/ftl/ftl_trace.o 00:04:30.142 LIB libspdk_vhost.a 00:04:30.142 LIB libspdk_iscsi.a 00:04:30.142 SO libspdk_vhost.so.8.0 00:04:30.142 SO libspdk_iscsi.so.8.0 00:04:30.142 SYMLINK libspdk_vhost.so 00:04:30.142 SYMLINK libspdk_nvmf.so 00:04:30.142 LIB libspdk_ftl.a 00:04:30.400 SYMLINK libspdk_iscsi.so 00:04:30.400 SO libspdk_ftl.so.9.0 00:04:30.658 SYMLINK libspdk_ftl.so 00:04:30.915 CC module/env_dpdk/env_dpdk_rpc.o 00:04:30.915 CC module/accel/ioat/accel_ioat.o 00:04:30.915 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:30.915 CC module/accel/error/accel_error.o 00:04:30.915 CC module/scheduler/gscheduler/gscheduler.o 00:04:30.915 CC module/blob/bdev/blob_bdev.o 00:04:30.915 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:30.915 CC module/keyring/file/keyring.o 
00:04:30.915 CC module/fsdev/aio/fsdev_aio.o 00:04:30.915 CC module/sock/posix/posix.o 00:04:30.915 LIB libspdk_env_dpdk_rpc.a 00:04:30.915 SO libspdk_env_dpdk_rpc.so.6.0 00:04:30.915 LIB libspdk_scheduler_dpdk_governor.a 00:04:30.915 SYMLINK libspdk_env_dpdk_rpc.so 00:04:30.915 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:30.915 LIB libspdk_scheduler_gscheduler.a 00:04:30.915 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:31.173 SO libspdk_scheduler_gscheduler.so.4.0 00:04:31.173 CC module/keyring/file/keyring_rpc.o 00:04:31.173 CC module/accel/error/accel_error_rpc.o 00:04:31.173 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:31.173 SYMLINK libspdk_scheduler_gscheduler.so 00:04:31.173 CC module/fsdev/aio/linux_aio_mgr.o 00:04:31.173 CC module/accel/ioat/accel_ioat_rpc.o 00:04:31.173 LIB libspdk_scheduler_dynamic.a 00:04:31.173 SO libspdk_scheduler_dynamic.so.4.0 00:04:31.173 LIB libspdk_keyring_file.a 00:04:31.173 SYMLINK libspdk_scheduler_dynamic.so 00:04:31.173 SO libspdk_keyring_file.so.2.0 00:04:31.173 LIB libspdk_blob_bdev.a 00:04:31.173 LIB libspdk_accel_error.a 00:04:31.173 CC module/keyring/linux/keyring.o 00:04:31.173 SO libspdk_blob_bdev.so.11.0 00:04:31.173 LIB libspdk_accel_ioat.a 00:04:31.173 SYMLINK libspdk_keyring_file.so 00:04:31.173 SO libspdk_accel_error.so.2.0 00:04:31.173 CC module/keyring/linux/keyring_rpc.o 00:04:31.173 SYMLINK libspdk_blob_bdev.so 00:04:31.173 SYMLINK libspdk_accel_error.so 00:04:31.173 SO libspdk_accel_ioat.so.6.0 00:04:31.431 CC module/accel/dsa/accel_dsa.o 00:04:31.432 CC module/accel/dsa/accel_dsa_rpc.o 00:04:31.432 SYMLINK libspdk_accel_ioat.so 00:04:31.432 CC module/accel/iaa/accel_iaa.o 00:04:31.432 CC module/accel/iaa/accel_iaa_rpc.o 00:04:31.432 LIB libspdk_keyring_linux.a 00:04:31.432 SO libspdk_keyring_linux.so.1.0 00:04:31.432 SYMLINK libspdk_keyring_linux.so 00:04:31.432 CC module/bdev/error/vbdev_error.o 00:04:31.432 CC module/bdev/delay/vbdev_delay.o 00:04:31.432 LIB libspdk_accel_iaa.a 00:04:31.432 SO 
libspdk_accel_iaa.so.3.0 00:04:31.432 CC module/blobfs/bdev/blobfs_bdev.o 00:04:31.432 LIB libspdk_fsdev_aio.a 00:04:31.432 CC module/bdev/gpt/gpt.o 00:04:31.432 SYMLINK libspdk_accel_iaa.so 00:04:31.432 CC module/bdev/gpt/vbdev_gpt.o 00:04:31.722 CC module/bdev/malloc/bdev_malloc.o 00:04:31.722 SO libspdk_fsdev_aio.so.1.0 00:04:31.722 CC module/bdev/lvol/vbdev_lvol.o 00:04:31.722 LIB libspdk_accel_dsa.a 00:04:31.722 SO libspdk_accel_dsa.so.5.0 00:04:31.722 SYMLINK libspdk_fsdev_aio.so 00:04:31.722 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:31.722 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:31.722 SYMLINK libspdk_accel_dsa.so 00:04:31.722 LIB libspdk_sock_posix.a 00:04:31.722 SO libspdk_sock_posix.so.6.0 00:04:31.722 CC module/bdev/error/vbdev_error_rpc.o 00:04:31.722 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:31.722 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:31.722 SYMLINK libspdk_sock_posix.so 00:04:31.722 LIB libspdk_blobfs_bdev.a 00:04:31.722 CC module/bdev/null/bdev_null.o 00:04:31.722 LIB libspdk_bdev_gpt.a 00:04:31.722 SO libspdk_blobfs_bdev.so.6.0 00:04:31.722 SO libspdk_bdev_gpt.so.6.0 00:04:31.980 LIB libspdk_bdev_error.a 00:04:31.980 SYMLINK libspdk_blobfs_bdev.so 00:04:31.980 CC module/bdev/nvme/bdev_nvme.o 00:04:31.980 LIB libspdk_bdev_delay.a 00:04:31.980 SO libspdk_bdev_error.so.6.0 00:04:31.980 SYMLINK libspdk_bdev_gpt.so 00:04:31.980 SO libspdk_bdev_delay.so.6.0 00:04:31.980 CC module/bdev/null/bdev_null_rpc.o 00:04:31.980 LIB libspdk_bdev_malloc.a 00:04:31.980 SYMLINK libspdk_bdev_error.so 00:04:31.980 CC module/bdev/passthru/vbdev_passthru.o 00:04:31.980 SO libspdk_bdev_malloc.so.6.0 00:04:31.980 SYMLINK libspdk_bdev_delay.so 00:04:31.980 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:31.980 CC module/bdev/raid/bdev_raid.o 00:04:31.980 SYMLINK libspdk_bdev_malloc.so 00:04:31.980 CC module/bdev/nvme/nvme_rpc.o 00:04:31.980 LIB libspdk_bdev_null.a 00:04:31.980 SO libspdk_bdev_null.so.6.0 00:04:31.980 CC module/bdev/split/vbdev_split.o 
00:04:32.239 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:32.239 LIB libspdk_bdev_lvol.a 00:04:32.239 SYMLINK libspdk_bdev_null.so 00:04:32.239 CC module/bdev/split/vbdev_split_rpc.o 00:04:32.239 SO libspdk_bdev_lvol.so.6.0 00:04:32.239 CC module/bdev/xnvme/bdev_xnvme.o 00:04:32.239 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:32.239 SYMLINK libspdk_bdev_lvol.so 00:04:32.239 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:32.239 CC module/bdev/nvme/bdev_mdns_client.o 00:04:32.239 LIB libspdk_bdev_split.a 00:04:32.239 SO libspdk_bdev_split.so.6.0 00:04:32.239 CC module/bdev/nvme/vbdev_opal.o 00:04:32.239 LIB libspdk_bdev_passthru.a 00:04:32.498 SYMLINK libspdk_bdev_split.so 00:04:32.498 SO libspdk_bdev_passthru.so.6.0 00:04:32.498 CC module/bdev/aio/bdev_aio.o 00:04:32.498 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:32.498 LIB libspdk_bdev_xnvme.a 00:04:32.498 SYMLINK libspdk_bdev_passthru.so 00:04:32.498 SO libspdk_bdev_xnvme.so.3.0 00:04:32.498 CC module/bdev/ftl/bdev_ftl.o 00:04:32.498 CC module/bdev/iscsi/bdev_iscsi.o 00:04:32.498 SYMLINK libspdk_bdev_xnvme.so 00:04:32.498 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:32.498 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:32.498 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:32.498 LIB libspdk_bdev_zone_block.a 00:04:32.498 SO libspdk_bdev_zone_block.so.6.0 00:04:32.755 SYMLINK libspdk_bdev_zone_block.so 00:04:32.755 CC module/bdev/aio/bdev_aio_rpc.o 00:04:32.755 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:32.755 CC module/bdev/raid/bdev_raid_rpc.o 00:04:32.755 CC module/bdev/raid/bdev_raid_sb.o 00:04:32.755 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:32.755 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:32.755 LIB libspdk_bdev_iscsi.a 00:04:32.755 SO libspdk_bdev_iscsi.so.6.0 00:04:32.755 LIB libspdk_bdev_aio.a 00:04:32.755 LIB libspdk_bdev_ftl.a 00:04:32.755 SO libspdk_bdev_ftl.so.6.0 00:04:32.755 CC module/bdev/raid/raid0.o 00:04:32.755 SO libspdk_bdev_aio.so.6.0 00:04:32.755 SYMLINK 
libspdk_bdev_iscsi.so 00:04:32.755 CC module/bdev/raid/raid1.o 00:04:32.755 SYMLINK libspdk_bdev_ftl.so 00:04:32.755 CC module/bdev/raid/concat.o 00:04:32.755 SYMLINK libspdk_bdev_aio.so 00:04:33.013 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:33.013 LIB libspdk_bdev_raid.a 00:04:33.013 LIB libspdk_bdev_virtio.a 00:04:33.013 SO libspdk_bdev_raid.so.6.0 00:04:33.013 SO libspdk_bdev_virtio.so.6.0 00:04:33.318 SYMLINK libspdk_bdev_virtio.so 00:04:33.318 SYMLINK libspdk_bdev_raid.so 00:04:34.254 LIB libspdk_bdev_nvme.a 00:04:34.514 SO libspdk_bdev_nvme.so.7.0 00:04:34.514 SYMLINK libspdk_bdev_nvme.so 00:04:34.772 CC module/event/subsystems/keyring/keyring.o 00:04:34.772 CC module/event/subsystems/vmd/vmd.o 00:04:34.772 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:34.772 CC module/event/subsystems/iobuf/iobuf.o 00:04:34.772 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:34.772 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:34.772 CC module/event/subsystems/fsdev/fsdev.o 00:04:35.032 CC module/event/subsystems/sock/sock.o 00:04:35.032 CC module/event/subsystems/scheduler/scheduler.o 00:04:35.032 LIB libspdk_event_keyring.a 00:04:35.032 SO libspdk_event_keyring.so.1.0 00:04:35.032 LIB libspdk_event_vmd.a 00:04:35.032 LIB libspdk_event_vhost_blk.a 00:04:35.032 LIB libspdk_event_fsdev.a 00:04:35.032 LIB libspdk_event_scheduler.a 00:04:35.032 LIB libspdk_event_sock.a 00:04:35.032 SO libspdk_event_vhost_blk.so.3.0 00:04:35.032 SO libspdk_event_fsdev.so.1.0 00:04:35.032 LIB libspdk_event_iobuf.a 00:04:35.032 SO libspdk_event_vmd.so.6.0 00:04:35.032 SO libspdk_event_scheduler.so.4.0 00:04:35.032 SO libspdk_event_sock.so.5.0 00:04:35.032 SYMLINK libspdk_event_keyring.so 00:04:35.032 SO libspdk_event_iobuf.so.3.0 00:04:35.032 SYMLINK libspdk_event_vhost_blk.so 00:04:35.032 SYMLINK libspdk_event_vmd.so 00:04:35.032 SYMLINK libspdk_event_scheduler.so 00:04:35.032 SYMLINK libspdk_event_fsdev.so 00:04:35.032 SYMLINK libspdk_event_sock.so 00:04:35.032 SYMLINK 
libspdk_event_iobuf.so 00:04:35.290 CC module/event/subsystems/accel/accel.o 00:04:35.549 LIB libspdk_event_accel.a 00:04:35.549 SO libspdk_event_accel.so.6.0 00:04:35.549 SYMLINK libspdk_event_accel.so 00:04:35.808 CC module/event/subsystems/bdev/bdev.o 00:04:36.067 LIB libspdk_event_bdev.a 00:04:36.067 SO libspdk_event_bdev.so.6.0 00:04:36.067 SYMLINK libspdk_event_bdev.so 00:04:36.067 CC module/event/subsystems/scsi/scsi.o 00:04:36.067 CC module/event/subsystems/ublk/ublk.o 00:04:36.067 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:36.067 CC module/event/subsystems/nbd/nbd.o 00:04:36.067 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:36.325 LIB libspdk_event_nbd.a 00:04:36.326 LIB libspdk_event_scsi.a 00:04:36.326 LIB libspdk_event_ublk.a 00:04:36.326 SO libspdk_event_scsi.so.6.0 00:04:36.326 SO libspdk_event_nbd.so.6.0 00:04:36.326 SO libspdk_event_ublk.so.3.0 00:04:36.326 SYMLINK libspdk_event_nbd.so 00:04:36.326 SYMLINK libspdk_event_ublk.so 00:04:36.326 SYMLINK libspdk_event_scsi.so 00:04:36.326 LIB libspdk_event_nvmf.a 00:04:36.326 SO libspdk_event_nvmf.so.6.0 00:04:36.584 SYMLINK libspdk_event_nvmf.so 00:04:36.584 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:36.584 CC module/event/subsystems/iscsi/iscsi.o 00:04:36.584 LIB libspdk_event_vhost_scsi.a 00:04:36.584 SO libspdk_event_vhost_scsi.so.3.0 00:04:36.584 LIB libspdk_event_iscsi.a 00:04:36.845 SYMLINK libspdk_event_vhost_scsi.so 00:04:36.845 SO libspdk_event_iscsi.so.6.0 00:04:36.845 SYMLINK libspdk_event_iscsi.so 00:04:36.845 SO libspdk.so.6.0 00:04:36.845 SYMLINK libspdk.so 00:04:37.108 CC app/trace_record/trace_record.o 00:04:37.108 CXX app/trace/trace.o 00:04:37.108 CC app/spdk_lspci/spdk_lspci.o 00:04:37.108 CC app/iscsi_tgt/iscsi_tgt.o 00:04:37.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:37.108 CC app/nvmf_tgt/nvmf_main.o 00:04:37.108 CC app/spdk_tgt/spdk_tgt.o 00:04:37.108 CC examples/util/zipf/zipf.o 00:04:37.108 CC examples/ioat/perf/perf.o 00:04:37.108 CC 
test/thread/poller_perf/poller_perf.o 00:04:37.108 LINK spdk_lspci 00:04:37.370 LINK interrupt_tgt 00:04:37.370 LINK nvmf_tgt 00:04:37.370 LINK zipf 00:04:37.370 LINK poller_perf 00:04:37.370 LINK spdk_tgt 00:04:37.370 LINK iscsi_tgt 00:04:37.370 LINK spdk_trace_record 00:04:37.370 LINK ioat_perf 00:04:37.370 CC app/spdk_nvme_perf/perf.o 00:04:37.370 LINK spdk_trace 00:04:37.370 CC app/spdk_nvme_identify/identify.o 00:04:37.647 CC examples/ioat/verify/verify.o 00:04:37.647 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.647 CC app/spdk_top/spdk_top.o 00:04:37.647 CC test/dma/test_dma/test_dma.o 00:04:37.647 CC examples/sock/hello_world/hello_sock.o 00:04:37.647 CC examples/thread/thread/thread_ex.o 00:04:37.647 CC test/app/bdev_svc/bdev_svc.o 00:04:37.647 LINK verify 00:04:37.647 LINK spdk_nvme_discover 00:04:37.648 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:37.909 LINK bdev_svc 00:04:37.909 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:37.909 LINK hello_sock 00:04:37.909 LINK thread 00:04:37.909 TEST_HEADER include/spdk/accel.h 00:04:37.909 TEST_HEADER include/spdk/accel_module.h 00:04:37.909 TEST_HEADER include/spdk/assert.h 00:04:37.909 TEST_HEADER include/spdk/barrier.h 00:04:37.909 TEST_HEADER include/spdk/base64.h 00:04:37.909 TEST_HEADER include/spdk/bdev.h 00:04:37.909 TEST_HEADER include/spdk/bdev_module.h 00:04:37.909 TEST_HEADER include/spdk/bdev_zone.h 00:04:37.909 TEST_HEADER include/spdk/bit_array.h 00:04:37.909 TEST_HEADER include/spdk/bit_pool.h 00:04:37.909 TEST_HEADER include/spdk/blob_bdev.h 00:04:37.909 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:37.909 TEST_HEADER include/spdk/blobfs.h 00:04:37.909 TEST_HEADER include/spdk/blob.h 00:04:37.909 TEST_HEADER include/spdk/conf.h 00:04:37.909 TEST_HEADER include/spdk/config.h 00:04:37.909 TEST_HEADER include/spdk/cpuset.h 00:04:37.909 TEST_HEADER include/spdk/crc16.h 00:04:37.909 TEST_HEADER include/spdk/crc32.h 00:04:37.909 TEST_HEADER include/spdk/crc64.h 00:04:37.909 TEST_HEADER 
include/spdk/dif.h 00:04:37.909 TEST_HEADER include/spdk/dma.h 00:04:37.909 TEST_HEADER include/spdk/endian.h 00:04:37.909 TEST_HEADER include/spdk/env_dpdk.h 00:04:37.909 TEST_HEADER include/spdk/env.h 00:04:37.909 TEST_HEADER include/spdk/event.h 00:04:37.909 TEST_HEADER include/spdk/fd_group.h 00:04:37.909 TEST_HEADER include/spdk/fd.h 00:04:37.909 TEST_HEADER include/spdk/file.h 00:04:37.909 TEST_HEADER include/spdk/fsdev.h 00:04:37.909 TEST_HEADER include/spdk/fsdev_module.h 00:04:37.909 TEST_HEADER include/spdk/ftl.h 00:04:37.909 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:37.909 TEST_HEADER include/spdk/gpt_spec.h 00:04:37.909 TEST_HEADER include/spdk/hexlify.h 00:04:37.909 TEST_HEADER include/spdk/histogram_data.h 00:04:37.909 TEST_HEADER include/spdk/idxd.h 00:04:37.909 TEST_HEADER include/spdk/idxd_spec.h 00:04:37.909 TEST_HEADER include/spdk/init.h 00:04:37.909 TEST_HEADER include/spdk/ioat.h 00:04:37.909 TEST_HEADER include/spdk/ioat_spec.h 00:04:37.909 TEST_HEADER include/spdk/iscsi_spec.h 00:04:37.909 TEST_HEADER include/spdk/json.h 00:04:37.909 LINK test_dma 00:04:37.909 TEST_HEADER include/spdk/jsonrpc.h 00:04:37.909 TEST_HEADER include/spdk/keyring.h 00:04:37.909 TEST_HEADER include/spdk/keyring_module.h 00:04:37.909 TEST_HEADER include/spdk/likely.h 00:04:37.909 TEST_HEADER include/spdk/log.h 00:04:37.909 TEST_HEADER include/spdk/lvol.h 00:04:37.909 TEST_HEADER include/spdk/md5.h 00:04:37.909 TEST_HEADER include/spdk/memory.h 00:04:37.909 TEST_HEADER include/spdk/mmio.h 00:04:37.909 TEST_HEADER include/spdk/nbd.h 00:04:37.909 TEST_HEADER include/spdk/net.h 00:04:37.909 TEST_HEADER include/spdk/notify.h 00:04:37.909 TEST_HEADER include/spdk/nvme.h 00:04:37.909 TEST_HEADER include/spdk/nvme_intel.h 00:04:37.909 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:37.909 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:37.909 TEST_HEADER include/spdk/nvme_spec.h 00:04:37.909 TEST_HEADER include/spdk/nvme_zns.h 00:04:37.909 TEST_HEADER 
include/spdk/nvmf_cmd.h 00:04:37.909 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:37.909 TEST_HEADER include/spdk/nvmf.h 00:04:37.909 TEST_HEADER include/spdk/nvmf_spec.h 00:04:37.909 TEST_HEADER include/spdk/nvmf_transport.h 00:04:37.909 TEST_HEADER include/spdk/opal.h 00:04:37.909 TEST_HEADER include/spdk/opal_spec.h 00:04:37.909 TEST_HEADER include/spdk/pci_ids.h 00:04:37.909 TEST_HEADER include/spdk/pipe.h 00:04:37.909 TEST_HEADER include/spdk/queue.h 00:04:37.909 TEST_HEADER include/spdk/reduce.h 00:04:37.909 TEST_HEADER include/spdk/rpc.h 00:04:37.909 TEST_HEADER include/spdk/scheduler.h 00:04:37.909 TEST_HEADER include/spdk/scsi.h 00:04:37.909 TEST_HEADER include/spdk/scsi_spec.h 00:04:37.909 TEST_HEADER include/spdk/sock.h 00:04:37.909 TEST_HEADER include/spdk/stdinc.h 00:04:37.909 TEST_HEADER include/spdk/string.h 00:04:37.909 TEST_HEADER include/spdk/thread.h 00:04:37.909 TEST_HEADER include/spdk/trace.h 00:04:37.909 TEST_HEADER include/spdk/trace_parser.h 00:04:37.909 TEST_HEADER include/spdk/tree.h 00:04:37.909 TEST_HEADER include/spdk/ublk.h 00:04:37.909 TEST_HEADER include/spdk/util.h 00:04:37.909 TEST_HEADER include/spdk/uuid.h 00:04:37.910 TEST_HEADER include/spdk/version.h 00:04:38.170 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:38.170 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:38.170 TEST_HEADER include/spdk/vhost.h 00:04:38.170 TEST_HEADER include/spdk/vmd.h 00:04:38.170 TEST_HEADER include/spdk/xor.h 00:04:38.170 TEST_HEADER include/spdk/zipf.h 00:04:38.170 CXX test/cpp_headers/accel.o 00:04:38.170 LINK nvme_fuzz 00:04:38.170 CC test/env/mem_callbacks/mem_callbacks.o 00:04:38.170 CC test/event/event_perf/event_perf.o 00:04:38.170 CC examples/vmd/lsvmd/lsvmd.o 00:04:38.170 CXX test/cpp_headers/accel_module.o 00:04:38.170 CC test/event/reactor/reactor.o 00:04:38.170 LINK spdk_nvme_perf 00:04:38.170 LINK lsvmd 00:04:38.170 LINK event_perf 00:04:38.428 LINK spdk_nvme_identify 00:04:38.428 CXX test/cpp_headers/assert.o 00:04:38.428 CC 
test/app/histogram_perf/histogram_perf.o 00:04:38.428 LINK reactor 00:04:38.428 CXX test/cpp_headers/barrier.o 00:04:38.428 LINK spdk_top 00:04:38.428 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:38.429 LINK histogram_perf 00:04:38.429 CC test/event/reactor_perf/reactor_perf.o 00:04:38.429 CC examples/vmd/led/led.o 00:04:38.429 CC test/event/app_repeat/app_repeat.o 00:04:38.429 CXX test/cpp_headers/base64.o 00:04:38.687 CC test/env/vtophys/vtophys.o 00:04:38.687 CXX test/cpp_headers/bdev.o 00:04:38.687 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:38.687 LINK reactor_perf 00:04:38.687 LINK led 00:04:38.687 LINK mem_callbacks 00:04:38.687 LINK app_repeat 00:04:38.687 LINK vtophys 00:04:38.687 CXX test/cpp_headers/bdev_module.o 00:04:38.687 CC app/spdk_dd/spdk_dd.o 00:04:38.687 CXX test/cpp_headers/bdev_zone.o 00:04:38.687 CXX test/cpp_headers/bit_array.o 00:04:38.687 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:38.945 CXX test/cpp_headers/bit_pool.o 00:04:38.945 LINK env_dpdk_post_init 00:04:38.945 CC examples/idxd/perf/perf.o 00:04:38.945 CC test/event/scheduler/scheduler.o 00:04:38.945 CC test/env/memory/memory_ut.o 00:04:38.945 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:38.945 LINK vhost_fuzz 00:04:38.945 CXX test/cpp_headers/blob_bdev.o 00:04:38.945 CC examples/accel/perf/accel_perf.o 00:04:38.945 LINK spdk_dd 00:04:39.203 LINK scheduler 00:04:39.203 CXX test/cpp_headers/blobfs_bdev.o 00:04:39.203 CC examples/blob/hello_world/hello_blob.o 00:04:39.203 LINK idxd_perf 00:04:39.203 LINK hello_fsdev 00:04:39.203 CC examples/nvme/hello_world/hello_world.o 00:04:39.203 CXX test/cpp_headers/blobfs.o 00:04:39.203 LINK iscsi_fuzz 00:04:39.461 CC app/fio/nvme/fio_plugin.o 00:04:39.461 CXX test/cpp_headers/blob.o 00:04:39.461 CC test/app/jsoncat/jsoncat.o 00:04:39.461 CC test/app/stub/stub.o 00:04:39.461 LINK hello_blob 00:04:39.461 CC test/env/pci/pci_ut.o 00:04:39.461 LINK hello_world 00:04:39.461 CC examples/nvme/reconnect/reconnect.o 
00:04:39.461 CXX test/cpp_headers/conf.o 00:04:39.461 LINK accel_perf 00:04:39.461 LINK jsoncat 00:04:39.720 LINK stub 00:04:39.720 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:39.720 CXX test/cpp_headers/config.o 00:04:39.720 CXX test/cpp_headers/cpuset.o 00:04:39.720 CC examples/blob/cli/blobcli.o 00:04:39.720 CXX test/cpp_headers/crc16.o 00:04:39.720 CC examples/nvme/arbitration/arbitration.o 00:04:39.720 CC app/fio/bdev/fio_plugin.o 00:04:39.720 LINK memory_ut 00:04:39.978 LINK pci_ut 00:04:39.978 LINK reconnect 00:04:39.978 LINK spdk_nvme 00:04:39.978 CXX test/cpp_headers/crc32.o 00:04:39.978 CXX test/cpp_headers/crc64.o 00:04:39.978 CXX test/cpp_headers/dif.o 00:04:39.978 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.978 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.978 LINK arbitration 00:04:40.237 CC examples/nvme/hotplug/hotplug.o 00:04:40.237 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:40.237 LINK spdk_bdev 00:04:40.237 CXX test/cpp_headers/dma.o 00:04:40.237 LINK nvme_manage 00:04:40.237 CC examples/nvme/abort/abort.o 00:04:40.237 LINK blobcli 00:04:40.237 LINK hello_bdev 00:04:40.237 CXX test/cpp_headers/endian.o 00:04:40.237 CXX test/cpp_headers/env_dpdk.o 00:04:40.237 LINK cmb_copy 00:04:40.237 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:40.237 CC app/vhost/vhost.o 00:04:40.237 LINK hotplug 00:04:40.237 CXX test/cpp_headers/env.o 00:04:40.496 CXX test/cpp_headers/event.o 00:04:40.496 CXX test/cpp_headers/fd_group.o 00:04:40.496 CXX test/cpp_headers/fd.o 00:04:40.496 LINK vhost 00:04:40.496 CXX test/cpp_headers/file.o 00:04:40.496 LINK pmr_persistence 00:04:40.496 CXX test/cpp_headers/fsdev.o 00:04:40.496 CC test/rpc_client/rpc_client_test.o 00:04:40.496 CXX test/cpp_headers/fsdev_module.o 00:04:40.496 LINK abort 00:04:40.496 CXX test/cpp_headers/ftl.o 00:04:40.496 CXX test/cpp_headers/fuse_dispatcher.o 00:04:40.754 CXX test/cpp_headers/gpt_spec.o 00:04:40.754 CXX test/cpp_headers/hexlify.o 00:04:40.754 LINK rpc_client_test 
00:04:40.754 CXX test/cpp_headers/histogram_data.o 00:04:40.754 CXX test/cpp_headers/idxd.o 00:04:40.754 CC test/accel/dif/dif.o 00:04:40.754 CXX test/cpp_headers/idxd_spec.o 00:04:40.754 CC test/blobfs/mkfs/mkfs.o 00:04:40.754 CXX test/cpp_headers/init.o 00:04:40.754 CXX test/cpp_headers/ioat.o 00:04:40.754 CC test/lvol/esnap/esnap.o 00:04:40.754 CXX test/cpp_headers/ioat_spec.o 00:04:40.754 LINK bdevperf 00:04:41.012 CC test/nvme/aer/aer.o 00:04:41.012 LINK mkfs 00:04:41.012 CXX test/cpp_headers/iscsi_spec.o 00:04:41.012 CXX test/cpp_headers/json.o 00:04:41.012 CC test/nvme/reset/reset.o 00:04:41.012 CC test/nvme/sgl/sgl.o 00:04:41.012 CC test/nvme/e2edp/nvme_dp.o 00:04:41.012 CXX test/cpp_headers/jsonrpc.o 00:04:41.269 CC test/nvme/err_injection/err_injection.o 00:04:41.269 CC test/nvme/overhead/overhead.o 00:04:41.269 LINK reset 00:04:41.269 LINK aer 00:04:41.269 CC examples/nvmf/nvmf/nvmf.o 00:04:41.269 CXX test/cpp_headers/keyring.o 00:04:41.269 LINK sgl 00:04:41.269 LINK err_injection 00:04:41.269 LINK nvme_dp 00:04:41.269 CXX test/cpp_headers/keyring_module.o 00:04:41.527 CC test/nvme/reserve/reserve.o 00:04:41.527 CC test/nvme/startup/startup.o 00:04:41.527 LINK dif 00:04:41.527 CC test/nvme/simple_copy/simple_copy.o 00:04:41.527 LINK overhead 00:04:41.527 LINK nvmf 00:04:41.527 CXX test/cpp_headers/likely.o 00:04:41.527 CC test/nvme/connect_stress/connect_stress.o 00:04:41.527 CC test/nvme/boot_partition/boot_partition.o 00:04:41.527 LINK startup 00:04:41.527 LINK reserve 00:04:41.527 CXX test/cpp_headers/log.o 00:04:41.785 CXX test/cpp_headers/lvol.o 00:04:41.785 LINK simple_copy 00:04:41.786 CC test/nvme/compliance/nvme_compliance.o 00:04:41.786 CC test/nvme/fused_ordering/fused_ordering.o 00:04:41.786 LINK boot_partition 00:04:41.786 LINK connect_stress 00:04:41.786 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:41.786 CXX test/cpp_headers/md5.o 00:04:41.786 CC test/nvme/fdp/fdp.o 00:04:41.786 CC test/nvme/cuse/cuse.o 00:04:41.786 CXX 
test/cpp_headers/memory.o 00:04:41.786 LINK fused_ordering 00:04:41.786 CXX test/cpp_headers/mmio.o 00:04:42.044 LINK doorbell_aers 00:04:42.044 CC test/bdev/bdevio/bdevio.o 00:04:42.044 CXX test/cpp_headers/nbd.o 00:04:42.044 CXX test/cpp_headers/net.o 00:04:42.044 LINK nvme_compliance 00:04:42.044 CXX test/cpp_headers/notify.o 00:04:42.044 CXX test/cpp_headers/nvme.o 00:04:42.044 CXX test/cpp_headers/nvme_intel.o 00:04:42.044 CXX test/cpp_headers/nvme_ocssd.o 00:04:42.044 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:42.044 LINK fdp 00:04:42.044 CXX test/cpp_headers/nvme_spec.o 00:04:42.303 CXX test/cpp_headers/nvme_zns.o 00:04:42.303 CXX test/cpp_headers/nvmf_cmd.o 00:04:42.303 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:42.303 CXX test/cpp_headers/nvmf.o 00:04:42.303 CXX test/cpp_headers/nvmf_spec.o 00:04:42.303 CXX test/cpp_headers/nvmf_transport.o 00:04:42.303 CXX test/cpp_headers/opal.o 00:04:42.303 CXX test/cpp_headers/opal_spec.o 00:04:42.303 CXX test/cpp_headers/pci_ids.o 00:04:42.303 CXX test/cpp_headers/pipe.o 00:04:42.303 LINK bdevio 00:04:42.303 CXX test/cpp_headers/queue.o 00:04:42.303 CXX test/cpp_headers/reduce.o 00:04:42.303 CXX test/cpp_headers/rpc.o 00:04:42.562 CXX test/cpp_headers/scheduler.o 00:04:42.562 CXX test/cpp_headers/scsi.o 00:04:42.562 CXX test/cpp_headers/scsi_spec.o 00:04:42.562 CXX test/cpp_headers/sock.o 00:04:42.562 CXX test/cpp_headers/stdinc.o 00:04:42.562 CXX test/cpp_headers/string.o 00:04:42.562 CXX test/cpp_headers/thread.o 00:04:42.562 CXX test/cpp_headers/trace.o 00:04:42.562 CXX test/cpp_headers/trace_parser.o 00:04:42.562 CXX test/cpp_headers/tree.o 00:04:42.562 CXX test/cpp_headers/ublk.o 00:04:42.562 CXX test/cpp_headers/util.o 00:04:42.562 CXX test/cpp_headers/uuid.o 00:04:42.562 CXX test/cpp_headers/version.o 00:04:42.562 CXX test/cpp_headers/vfio_user_pci.o 00:04:42.562 CXX test/cpp_headers/vfio_user_spec.o 00:04:42.820 CXX test/cpp_headers/vhost.o 00:04:42.820 CXX test/cpp_headers/vmd.o 00:04:42.821 CXX 
test/cpp_headers/xor.o 00:04:42.821 CXX test/cpp_headers/zipf.o 00:04:43.081 LINK cuse 00:04:45.638 LINK esnap 00:04:45.638 00:04:45.638 real 1m6.839s 00:04:45.638 user 5m26.971s 00:04:45.638 sys 0m56.075s 00:04:45.638 08:48:39 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:45.638 ************************************ 00:04:45.638 END TEST make 00:04:45.638 ************************************ 00:04:45.638 08:48:39 make -- common/autotest_common.sh@10 -- $ set +x 00:04:45.638 08:48:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:45.638 08:48:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:45.638 08:48:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:45.638 08:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.638 08:48:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:45.638 08:48:39 -- pm/common@44 -- $ pid=5802 00:04:45.638 08:48:39 -- pm/common@50 -- $ kill -TERM 5802 00:04:45.638 08:48:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.638 08:48:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:45.638 08:48:39 -- pm/common@44 -- $ pid=5803 00:04:45.638 08:48:39 -- pm/common@50 -- $ kill -TERM 5803 00:04:45.638 08:48:39 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:45.638 08:48:39 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:45.638 08:48:39 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.638 08:48:39 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.638 08:48:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.638 08:48:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.638 08:48:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.638 08:48:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.638 08:48:39 -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.639 08:48:39 -- 
scripts/common.sh@337 -- # IFS=.-: 00:04:45.639 08:48:39 -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.639 08:48:39 -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.639 08:48:39 -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.639 08:48:39 -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.639 08:48:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.639 08:48:39 -- scripts/common.sh@344 -- # case "$op" in 00:04:45.639 08:48:39 -- scripts/common.sh@345 -- # : 1 00:04:45.639 08:48:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.639 08:48:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.639 08:48:39 -- scripts/common.sh@365 -- # decimal 1 00:04:45.639 08:48:39 -- scripts/common.sh@353 -- # local d=1 00:04:45.639 08:48:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.639 08:48:39 -- scripts/common.sh@355 -- # echo 1 00:04:45.639 08:48:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.639 08:48:39 -- scripts/common.sh@366 -- # decimal 2 00:04:45.639 08:48:39 -- scripts/common.sh@353 -- # local d=2 00:04:45.639 08:48:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.639 08:48:39 -- scripts/common.sh@355 -- # echo 2 00:04:45.639 08:48:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.639 08:48:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.639 08:48:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.639 08:48:39 -- scripts/common.sh@368 -- # return 0 00:04:45.639 08:48:39 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.639 08:48:39 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.639 --rc genhtml_branch_coverage=1 00:04:45.639 --rc genhtml_function_coverage=1 00:04:45.639 --rc genhtml_legend=1 00:04:45.639 --rc geninfo_all_blocks=1 00:04:45.639 --rc geninfo_unexecuted_blocks=1 
00:04:45.639 00:04:45.639 ' 00:04:45.639 08:48:39 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.639 --rc genhtml_branch_coverage=1 00:04:45.639 --rc genhtml_function_coverage=1 00:04:45.639 --rc genhtml_legend=1 00:04:45.639 --rc geninfo_all_blocks=1 00:04:45.639 --rc geninfo_unexecuted_blocks=1 00:04:45.639 00:04:45.639 ' 00:04:45.639 08:48:39 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.639 --rc genhtml_branch_coverage=1 00:04:45.639 --rc genhtml_function_coverage=1 00:04:45.639 --rc genhtml_legend=1 00:04:45.639 --rc geninfo_all_blocks=1 00:04:45.639 --rc geninfo_unexecuted_blocks=1 00:04:45.639 00:04:45.639 ' 00:04:45.639 08:48:39 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:45.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.639 --rc genhtml_branch_coverage=1 00:04:45.639 --rc genhtml_function_coverage=1 00:04:45.639 --rc genhtml_legend=1 00:04:45.639 --rc geninfo_all_blocks=1 00:04:45.639 --rc geninfo_unexecuted_blocks=1 00:04:45.639 00:04:45.639 ' 00:04:45.639 08:48:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.639 08:48:39 -- nvmf/common.sh@7 -- # uname -s 00:04:45.639 08:48:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.639 08:48:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.639 08:48:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.639 08:48:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.639 08:48:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.639 08:48:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.639 08:48:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.639 08:48:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.639 08:48:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.639 
08:48:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.639 08:48:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:189c006a-5f8e-491e-b60d-b5b66b03007e 00:04:45.639 08:48:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=189c006a-5f8e-491e-b60d-b5b66b03007e 00:04:45.639 08:48:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.639 08:48:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.639 08:48:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.639 08:48:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.639 08:48:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.639 08:48:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.639 08:48:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.639 08:48:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.639 08:48:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.639 08:48:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.639 08:48:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.639 08:48:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:45.639 08:48:39 -- paths/export.sh@5 -- # export PATH 00:04:45.639 08:48:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.639 08:48:39 -- nvmf/common.sh@51 -- # : 0 00:04:45.639 08:48:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.639 08:48:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.639 08:48:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.639 08:48:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.639 08:48:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.639 08:48:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.639 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.639 08:48:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.639 08:48:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.639 08:48:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.639 08:48:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:45.639 08:48:39 -- spdk/autotest.sh@32 -- # uname -s 00:04:45.639 08:48:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:45.639 08:48:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:45.639 08:48:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:45.639 08:48:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:45.639 08:48:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:45.639 08:48:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:45.639 08:48:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:45.639 08:48:39 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:45.639 08:48:39 -- spdk/autotest.sh@48 -- # udevadm_pid=67051 00:04:45.639 08:48:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:45.639 08:48:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:45.639 08:48:39 -- pm/common@17 -- # local monitor 00:04:45.639 08:48:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.639 08:48:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.639 08:48:39 -- pm/common@25 -- # sleep 1 00:04:45.639 08:48:39 -- pm/common@21 -- # date +%s 00:04:45.639 08:48:39 -- pm/common@21 -- # date +%s 00:04:45.639 08:48:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732783719 00:04:45.639 08:48:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732783719 00:04:45.639 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732783719_collect-cpu-load.pm.log 00:04:45.639 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732783719_collect-vmstat.pm.log 00:04:47.021 08:48:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:47.021 08:48:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:47.021 08:48:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.021 08:48:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.021 08:48:40 -- spdk/autotest.sh@59 -- # create_test_list 00:04:47.021 08:48:40 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:47.021 08:48:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.021 08:48:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:47.021 08:48:40 -- spdk/autotest.sh@61 -- # readlink -f 
/home/vagrant/spdk_repo/spdk 00:04:47.021 08:48:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:47.021 08:48:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:47.021 08:48:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:47.021 08:48:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:47.021 08:48:40 -- common/autotest_common.sh@1455 -- # uname 00:04:47.021 08:48:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:47.021 08:48:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:47.021 08:48:40 -- common/autotest_common.sh@1475 -- # uname 00:04:47.021 08:48:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:47.021 08:48:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:47.021 08:48:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:47.021 lcov: LCOV version 1.15 00:04:47.021 08:48:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:01.932 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:01.932 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:16.837 08:49:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:16.837 08:49:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.837 08:49:10 -- common/autotest_common.sh@10 -- # set +x 00:05:16.837 08:49:10 -- spdk/autotest.sh@78 -- # rm -f 00:05:16.837 08:49:10 -- 
spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.410 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:17.410 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:17.410 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:17.410 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:17.410 08:49:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:17.410 08:49:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:17.410 08:49:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:17.410 08:49:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme2n1/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:17.410 08:49:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:17.410 08:49:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:17.410 08:49:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:17.410 08:49:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:17.410 08:49:11 -- spdk/autotest.sh@97 -- # for dev in 
/dev/nvme*n!(*p*) 00:05:17.410 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.410 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:17.410 08:49:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:17.410 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.410 No valid GPT data, bailing 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # pt= 00:05:17.672 08:49:11 -- scripts/common.sh@395 -- # return 1 00:05:17.672 08:49:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.672 1+0 records in 00:05:17.672 1+0 records out 00:05:17.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259053 s, 40.5 MB/s 00:05:17.672 08:49:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.672 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.672 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:17.672 08:49:11 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:17.672 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:17.672 No valid GPT data, bailing 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # pt= 00:05:17.672 08:49:11 -- scripts/common.sh@395 -- # return 1 00:05:17.672 08:49:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:17.672 1+0 records in 00:05:17.672 1+0 records out 00:05:17.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00685681 s, 153 MB/s 00:05:17.672 08:49:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.672 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.672 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:17.672 08:49:11 -- scripts/common.sh@381 -- # local 
block=/dev/nvme2n1 pt 00:05:17.672 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:17.672 No valid GPT data, bailing 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:17.672 08:49:11 -- scripts/common.sh@394 -- # pt= 00:05:17.672 08:49:11 -- scripts/common.sh@395 -- # return 1 00:05:17.672 08:49:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:17.672 1+0 records in 00:05:17.672 1+0 records out 00:05:17.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520483 s, 201 MB/s 00:05:17.672 08:49:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.672 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.672 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:17.672 08:49:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:17.672 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:17.934 No valid GPT data, bailing 00:05:17.934 08:49:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:17.934 08:49:11 -- scripts/common.sh@394 -- # pt= 00:05:17.934 08:49:11 -- scripts/common.sh@395 -- # return 1 00:05:17.934 08:49:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:17.934 1+0 records in 00:05:17.934 1+0 records out 00:05:17.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599973 s, 175 MB/s 00:05:17.934 08:49:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.934 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.934 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:17.934 08:49:11 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:17.934 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:17.934 No valid GPT data, bailing 00:05:17.934 08:49:11 -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:17.934 08:49:11 -- scripts/common.sh@394 -- # pt= 00:05:17.934 08:49:11 -- scripts/common.sh@395 -- # return 1 00:05:17.934 08:49:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:17.934 1+0 records in 00:05:17.934 1+0 records out 00:05:17.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667034 s, 157 MB/s 00:05:17.934 08:49:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.934 08:49:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.934 08:49:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:17.934 08:49:11 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:17.934 08:49:11 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:17.934 No valid GPT data, bailing 00:05:17.934 08:49:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:17.934 08:49:12 -- scripts/common.sh@394 -- # pt= 00:05:17.934 08:49:12 -- scripts/common.sh@395 -- # return 1 00:05:17.934 08:49:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:17.934 1+0 records in 00:05:17.934 1+0 records out 00:05:17.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065012 s, 161 MB/s 00:05:17.934 08:49:12 -- spdk/autotest.sh@105 -- # sync 00:05:18.194 08:49:12 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:18.194 08:49:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:18.194 08:49:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.107 08:49:13 -- spdk/autotest.sh@111 -- # uname -s 00:05:20.107 08:49:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:20.107 08:49:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:20.107 08:49:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:20.107 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.678 Hugepages 00:05:20.678 node hugesize free / total 00:05:20.678 node0 1048576kB 0 / 0 00:05:20.678 node0 2048kB 0 / 0 00:05:20.678 00:05:20.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.678 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:20.939 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:20.939 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:20.939 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:20.939 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:20.939 08:49:15 -- spdk/autotest.sh@117 -- # uname -s 00:05:20.939 08:49:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:20.939 08:49:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:20.940 08:49:15 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.081 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.081 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.081 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.081 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.081 08:49:16 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:23.468 08:49:17 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:23.468 08:49:17 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:23.468 08:49:17 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.468 08:49:17 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:23.468 08:49:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:23.468 08:49:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:23.468 08:49:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.468 08:49:17 -- 
common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.468 08:49:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:23.468 08:49:17 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:23.468 08:49:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:23.468 08:49:17 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.729 Waiting for block devices as requested 00:05:23.729 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.729 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.990 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:23.990 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.287 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:29.287 08:49:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.287 08:49:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:29.287 
08:49:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.287 08:49:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1541 -- # continue 00:05:29.287 08:49:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1524 -- # [[ -z 
/dev/nvme0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.287 08:49:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1541 -- # continue 00:05:29.287 08:49:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:29.287 08:49:23 -- 
common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.287 08:49:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1541 -- # continue 00:05:29.287 08:49:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:29.287 08:49:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl 
/dev/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.287 08:49:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.287 08:49:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.287 08:49:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.287 08:49:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.287 08:49:23 -- common/autotest_common.sh@1541 -- # continue 00:05:29.287 08:49:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:29.287 08:49:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.287 08:49:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.287 08:49:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:29.287 08:49:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.287 08:49:23 -- common/autotest_common.sh@10 -- # set +x 00:05:29.287 08:49:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.428 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.428 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.428 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.428 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.428 08:49:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:30.428 08:49:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.428 08:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.428 08:49:24 -- spdk/autotest.sh@131 
-- # opal_revert_cleanup 00:05:30.428 08:49:24 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:30.428 08:49:24 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.428 08:49:24 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:30.428 08:49:24 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:30.428 08:49:24 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:30.428 08:49:24 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:30.428 08:49:24 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:30.428 08:49:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:30.428 08:49:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:30.428 08:49:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.428 08:49:24 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.428 08:49:24 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:30.689 08:49:24 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:30.689 08:49:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:30.689 08:49:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:30.689 08:49:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.689 08:49:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:30.689 08:49:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.689 08:49:24 -- 
common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:30.689 08:49:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.689 08:49:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:30.689 08:49:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:30.689 08:49:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.689 08:49:24 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:30.689 08:49:24 -- common/autotest_common.sh@1570 -- # return 0 00:05:30.689 08:49:24 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:30.689 08:49:24 -- common/autotest_common.sh@1578 -- # return 0 00:05:30.689 08:49:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:30.689 08:49:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:30.689 08:49:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.689 08:49:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.689 08:49:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:30.689 08:49:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.689 08:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.689 08:49:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:30.689 08:49:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.689 08:49:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.689 08:49:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.689 08:49:24 -- common/autotest_common.sh@10 -- # set +x 00:05:30.689 ************************************ 00:05:30.689 START TEST env 00:05:30.689 ************************************ 00:05:30.689 08:49:24 env -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.689 * Looking for test storage... 00:05:30.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.689 08:49:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.689 08:49:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.689 08:49:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.689 08:49:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.689 08:49:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.689 08:49:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.689 08:49:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.689 08:49:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.689 08:49:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.689 08:49:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.689 08:49:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.689 08:49:24 env -- scripts/common.sh@344 -- # case "$op" in 00:05:30.689 08:49:24 env -- scripts/common.sh@345 -- # : 1 00:05:30.689 08:49:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.689 08:49:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.689 08:49:24 env -- scripts/common.sh@365 -- # decimal 1 00:05:30.689 08:49:24 env -- scripts/common.sh@353 -- # local d=1 00:05:30.689 08:49:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.689 08:49:24 env -- scripts/common.sh@355 -- # echo 1 00:05:30.689 08:49:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.689 08:49:24 env -- scripts/common.sh@366 -- # decimal 2 00:05:30.689 08:49:24 env -- scripts/common.sh@353 -- # local d=2 00:05:30.689 08:49:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.689 08:49:24 env -- scripts/common.sh@355 -- # echo 2 00:05:30.689 08:49:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.689 08:49:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.689 08:49:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.689 08:49:24 env -- scripts/common.sh@368 -- # return 0 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.689 --rc genhtml_branch_coverage=1 00:05:30.689 --rc genhtml_function_coverage=1 00:05:30.689 --rc genhtml_legend=1 00:05:30.689 --rc geninfo_all_blocks=1 00:05:30.689 --rc geninfo_unexecuted_blocks=1 00:05:30.689 00:05:30.689 ' 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.689 --rc genhtml_branch_coverage=1 00:05:30.689 --rc genhtml_function_coverage=1 00:05:30.689 --rc genhtml_legend=1 00:05:30.689 --rc geninfo_all_blocks=1 00:05:30.689 --rc geninfo_unexecuted_blocks=1 00:05:30.689 00:05:30.689 ' 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:30.689 --rc genhtml_branch_coverage=1 00:05:30.689 --rc genhtml_function_coverage=1 00:05:30.689 --rc genhtml_legend=1 00:05:30.689 --rc geninfo_all_blocks=1 00:05:30.689 --rc geninfo_unexecuted_blocks=1 00:05:30.689 00:05:30.689 ' 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.689 --rc genhtml_branch_coverage=1 00:05:30.689 --rc genhtml_function_coverage=1 00:05:30.689 --rc genhtml_legend=1 00:05:30.689 --rc geninfo_all_blocks=1 00:05:30.689 --rc geninfo_unexecuted_blocks=1 00:05:30.689 00:05:30.689 ' 00:05:30.689 08:49:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.689 08:49:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.690 08:49:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.690 08:49:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.690 ************************************ 00:05:30.690 START TEST env_memory 00:05:30.690 ************************************ 00:05:30.690 08:49:24 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.950 00:05:30.950 00:05:30.950 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.950 http://cunit.sourceforge.net/ 00:05:30.950 00:05:30.950 00:05:30.950 Suite: memory 00:05:30.950 Test: alloc and free memory map ...[2024-11-28 08:49:24.862455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.950 passed 00:05:30.950 Test: mem map translation ...[2024-11-28 08:49:24.902136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.950 [2024-11-28 08:49:24.902298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.950 [2024-11-28 08:49:24.902409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.950 [2024-11-28 08:49:24.902471] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.950 passed 00:05:30.950 Test: mem map registration ...[2024-11-28 08:49:24.970842] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:30.950 [2024-11-28 08:49:24.970988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:30.950 passed 00:05:30.950 Test: mem map adjacent registrations ...passed 00:05:30.950 00:05:30.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.950 suites 1 1 n/a 0 0 00:05:30.950 tests 4 4 4 0 0 00:05:30.950 asserts 152 152 152 0 n/a 00:05:30.950 00:05:30.950 Elapsed time = 0.234 seconds 00:05:31.211 00:05:31.211 real 0m0.274s 00:05:31.211 user 0m0.244s 00:05:31.211 sys 0m0.020s 00:05:31.211 08:49:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.211 ************************************ 00:05:31.211 END TEST env_memory 00:05:31.211 ************************************ 00:05:31.211 08:49:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:31.211 08:49:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.211 08:49:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.211 08:49:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.211 08:49:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.211 
************************************ 00:05:31.211 START TEST env_vtophys 00:05:31.211 ************************************ 00:05:31.211 08:49:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.211 EAL: lib.eal log level changed from notice to debug 00:05:31.211 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 1 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 2 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 3 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 4 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 5 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 6 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 7 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 8 as core 0 on socket 0 00:05:31.211 EAL: Detected lcore 9 as core 0 on socket 0 00:05:31.211 EAL: Maximum logical cores by configuration: 128 00:05:31.211 EAL: Detected CPU lcores: 10 00:05:31.211 EAL: Detected NUMA nodes: 1 00:05:31.211 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:31.211 EAL: Detected shared linkage of DPDK 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:31.211 EAL: Registered [vdev] bus. 
00:05:31.211 EAL: bus.vdev log level changed from disabled to notice 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:31.211 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:31.211 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:31.211 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:31.211 EAL: No shared files mode enabled, IPC will be disabled 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Selected IOVA mode 'PA' 00:05:31.212 EAL: Probing VFIO support... 00:05:31.212 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.212 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:31.212 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.212 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.212 EAL: Setting up physically contiguous memory... 
00:05:31.212 EAL: Setting maximum number of open files to 524288 00:05:31.212 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.212 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.212 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.212 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.212 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.212 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.212 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.212 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.212 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.212 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.212 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.212 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.212 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.212 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.212 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.212 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.212 EAL: Hugepages will be freed exactly as allocated. 
00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: TSC frequency is ~2600000 KHz 00:05:31.212 EAL: Main lcore 0 is ready (tid=7fb3ac66fa40;cpuset=[0]) 00:05:31.212 EAL: Trying to obtain current memory policy. 00:05:31.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.212 EAL: Restoring previous memory policy: 0 00:05:31.212 EAL: request: mp_malloc_sync 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.212 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:31.212 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.212 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:31.473 00:05:31.473 00:05:31.473 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.473 http://cunit.sourceforge.net/ 00:05:31.473 00:05:31.473 00:05:31.473 Suite: components_suite 00:05:31.734 Test: vtophys_malloc_test ...passed 00:05:31.734 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 4MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 4MB 00:05:31.734 EAL: Trying to obtain current memory policy. 
00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 6MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 6MB 00:05:31.734 EAL: Trying to obtain current memory policy. 00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 10MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 10MB 00:05:31.734 EAL: Trying to obtain current memory policy. 00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.734 EAL: Trying to obtain current memory policy. 
00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.734 EAL: Trying to obtain current memory policy. 00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.734 EAL: Trying to obtain current memory policy. 00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.734 EAL: Trying to obtain current memory policy. 
00:05:31.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.734 EAL: Restoring previous memory policy: 4 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.734 EAL: request: mp_malloc_sync 00:05:31.734 EAL: No shared files mode enabled, IPC is disabled 00:05:31.734 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.734 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.995 EAL: request: mp_malloc_sync 00:05:31.995 EAL: No shared files mode enabled, IPC is disabled 00:05:31.995 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.995 EAL: Trying to obtain current memory policy. 00:05:31.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.995 EAL: Restoring previous memory policy: 4 00:05:31.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.995 EAL: request: mp_malloc_sync 00:05:31.995 EAL: No shared files mode enabled, IPC is disabled 00:05:31.995 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.255 EAL: request: mp_malloc_sync 00:05:32.255 EAL: No shared files mode enabled, IPC is disabled 00:05:32.255 EAL: Heap on socket 0 was shrunk by 514MB 00:05:32.255 EAL: Trying to obtain current memory policy. 
00:05:32.255 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.515 EAL: Restoring previous memory policy: 4 00:05:32.515 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.515 EAL: request: mp_malloc_sync 00:05:32.515 EAL: No shared files mode enabled, IPC is disabled 00:05:32.515 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.515 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.774 passed 00:05:32.774 00:05:32.774 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.774 suites 1 1 n/a 0 0 00:05:32.774 tests 2 2 2 0 0 00:05:32.774 asserts 5218 5218 5218 0 n/a 00:05:32.774 00:05:32.774 Elapsed time = 1.335 seconds 00:05:32.774 EAL: request: mp_malloc_sync 00:05:32.774 EAL: No shared files mode enabled, IPC is disabled 00:05:32.774 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.774 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.774 EAL: request: mp_malloc_sync 00:05:32.774 EAL: No shared files mode enabled, IPC is disabled 00:05:32.774 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.774 EAL: No shared files mode enabled, IPC is disabled 00:05:32.774 EAL: No shared files mode enabled, IPC is disabled 00:05:32.774 EAL: No shared files mode enabled, IPC is disabled 00:05:32.774 00:05:32.774 real 0m1.577s 00:05:32.774 user 0m0.648s 00:05:32.774 sys 0m0.790s 00:05:32.774 08:49:26 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.774 ************************************ 00:05:32.774 END TEST env_vtophys 00:05:32.774 ************************************ 00:05:32.774 08:49:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.774 08:49:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.774 08:49:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.774 08:49:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.774 08:49:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.774 
************************************ 00:05:32.774 START TEST env_pci 00:05:32.774 ************************************ 00:05:32.774 08:49:26 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.774 00:05:32.774 00:05:32.774 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.774 http://cunit.sourceforge.net/ 00:05:32.774 00:05:32.774 00:05:32.774 Suite: pci 00:05:32.774 Test: pci_hook ...[2024-11-28 08:49:26.823392] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69804 has claimed it 00:05:32.774 passed 00:05:32.774 00:05:32.774 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.774 suites 1 1 n/a 0 0 00:05:32.774 tests 1 1 1 0 0 00:05:32.774 asserts 25 25 25 0 n/a 00:05:32.774 00:05:32.774 Elapsed time = 0.003 seconds 00:05:32.774 EAL: Cannot find device (10000:00:01.0) 00:05:32.774 EAL: Failed to attach device on primary process 00:05:32.774 00:05:32.774 real 0m0.049s 00:05:32.774 user 0m0.022s 00:05:32.774 sys 0m0.027s 00:05:32.774 08:49:26 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.774 08:49:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:32.774 ************************************ 00:05:32.774 END TEST env_pci 00:05:32.775 ************************************ 00:05:33.057 08:49:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:33.057 08:49:26 env -- env/env.sh@15 -- # uname 00:05:33.057 08:49:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:33.057 08:49:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:33.057 08:49:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.057 08:49:26 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:33.057 08:49:26 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.057 08:49:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.057 ************************************ 00:05:33.057 START TEST env_dpdk_post_init 00:05:33.057 ************************************ 00:05:33.057 08:49:26 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:33.057 EAL: Detected CPU lcores: 10 00:05:33.057 EAL: Detected NUMA nodes: 1 00:05:33.057 EAL: Detected shared linkage of DPDK 00:05:33.057 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.057 EAL: Selected IOVA mode 'PA' 00:05:33.057 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.057 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:33.057 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:33.057 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:33.057 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:33.057 Starting DPDK initialization... 00:05:33.057 Starting SPDK post initialization... 00:05:33.057 SPDK NVMe probe 00:05:33.057 Attaching to 0000:00:10.0 00:05:33.057 Attaching to 0000:00:11.0 00:05:33.057 Attaching to 0000:00:12.0 00:05:33.057 Attaching to 0000:00:13.0 00:05:33.057 Attached to 0000:00:10.0 00:05:33.057 Attached to 0000:00:11.0 00:05:33.057 Attached to 0000:00:13.0 00:05:33.057 Attached to 0000:00:12.0 00:05:33.057 Cleaning up... 
00:05:33.057 00:05:33.057 real 0m0.232s 00:05:33.057 user 0m0.062s 00:05:33.057 sys 0m0.070s 00:05:33.057 08:49:27 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.057 08:49:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.057 ************************************ 00:05:33.057 END TEST env_dpdk_post_init 00:05:33.057 ************************************ 00:05:33.317 08:49:27 env -- env/env.sh@26 -- # uname 00:05:33.317 08:49:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.317 08:49:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.317 08:49:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.317 08:49:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.317 08:49:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.317 ************************************ 00:05:33.317 START TEST env_mem_callbacks 00:05:33.317 ************************************ 00:05:33.317 08:49:27 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.317 EAL: Detected CPU lcores: 10 00:05:33.317 EAL: Detected NUMA nodes: 1 00:05:33.317 EAL: Detected shared linkage of DPDK 00:05:33.317 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.317 EAL: Selected IOVA mode 'PA' 00:05:33.317 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.317 00:05:33.317 00:05:33.317 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.317 http://cunit.sourceforge.net/ 00:05:33.317 00:05:33.317 00:05:33.317 Suite: memory 00:05:33.317 Test: test ... 
00:05:33.317 register 0x200000200000 2097152 00:05:33.317 malloc 3145728 00:05:33.317 register 0x200000400000 4194304 00:05:33.317 buf 0x200000500000 len 3145728 PASSED 00:05:33.317 malloc 64 00:05:33.317 buf 0x2000004fff40 len 64 PASSED 00:05:33.317 malloc 4194304 00:05:33.317 register 0x200000800000 6291456 00:05:33.317 buf 0x200000a00000 len 4194304 PASSED 00:05:33.317 free 0x200000500000 3145728 00:05:33.317 free 0x2000004fff40 64 00:05:33.317 unregister 0x200000400000 4194304 PASSED 00:05:33.317 free 0x200000a00000 4194304 00:05:33.317 unregister 0x200000800000 6291456 PASSED 00:05:33.317 malloc 8388608 00:05:33.317 register 0x200000400000 10485760 00:05:33.317 buf 0x200000600000 len 8388608 PASSED 00:05:33.317 free 0x200000600000 8388608 00:05:33.317 unregister 0x200000400000 10485760 PASSED 00:05:33.317 passed 00:05:33.317 00:05:33.317 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.317 suites 1 1 n/a 0 0 00:05:33.317 tests 1 1 1 0 0 00:05:33.317 asserts 15 15 15 0 n/a 00:05:33.317 00:05:33.317 Elapsed time = 0.011 seconds 00:05:33.317 00:05:33.317 real 0m0.177s 00:05:33.317 user 0m0.024s 00:05:33.317 sys 0m0.049s 00:05:33.317 ************************************ 00:05:33.317 END TEST env_mem_callbacks 00:05:33.317 ************************************ 00:05:33.317 08:49:27 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.317 08:49:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.577 ************************************ 00:05:33.577 END TEST env 00:05:33.577 ************************************ 00:05:33.577 00:05:33.577 real 0m2.820s 00:05:33.577 user 0m1.171s 00:05:33.577 sys 0m1.181s 00:05:33.577 08:49:27 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.577 08:49:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.577 08:49:27 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.577 08:49:27 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.577 08:49:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.577 08:49:27 -- common/autotest_common.sh@10 -- # set +x 00:05:33.577 ************************************ 00:05:33.577 START TEST rpc 00:05:33.577 ************************************ 00:05:33.577 08:49:27 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.577 * Looking for test storage... 00:05:33.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.577 08:49:27 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:33.577 08:49:27 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:33.577 08:49:27 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.577 08:49:27 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.577 08:49:27 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.577 08:49:27 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.577 08:49:27 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.577 08:49:27 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.577 08:49:27 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.577 08:49:27 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.577 08:49:27 rpc -- scripts/common.sh@345 -- # : 1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.577 08:49:27 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.577 08:49:27 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.577 08:49:27 rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.577 08:49:27 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.577 08:49:27 rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.577 08:49:27 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.577 08:49:27 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.577 08:49:27 rpc -- scripts/common.sh@368 -- # return 0 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.578 --rc genhtml_branch_coverage=1 00:05:33.578 --rc genhtml_function_coverage=1 00:05:33.578 --rc genhtml_legend=1 00:05:33.578 --rc geninfo_all_blocks=1 00:05:33.578 --rc geninfo_unexecuted_blocks=1 00:05:33.578 00:05:33.578 ' 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.578 --rc genhtml_branch_coverage=1 00:05:33.578 --rc genhtml_function_coverage=1 00:05:33.578 --rc genhtml_legend=1 00:05:33.578 --rc geninfo_all_blocks=1 00:05:33.578 --rc geninfo_unexecuted_blocks=1 00:05:33.578 00:05:33.578 ' 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:33.578 --rc genhtml_branch_coverage=1 00:05:33.578 --rc genhtml_function_coverage=1 00:05:33.578 --rc genhtml_legend=1 00:05:33.578 --rc geninfo_all_blocks=1 00:05:33.578 --rc geninfo_unexecuted_blocks=1 00:05:33.578 00:05:33.578 ' 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.578 --rc genhtml_branch_coverage=1 00:05:33.578 --rc genhtml_function_coverage=1 00:05:33.578 --rc genhtml_legend=1 00:05:33.578 --rc geninfo_all_blocks=1 00:05:33.578 --rc geninfo_unexecuted_blocks=1 00:05:33.578 00:05:33.578 ' 00:05:33.578 08:49:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69931 00:05:33.578 08:49:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.578 08:49:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:33.578 08:49:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69931 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@831 -- # '[' -z 69931 ']' 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.578 08:49:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.837 [2024-11-28 08:49:27.757935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:33.837 [2024-11-28 08:49:27.758280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69931 ] 00:05:33.837 [2024-11-28 08:49:27.909142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.097 [2024-11-28 08:49:27.974069] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.097 [2024-11-28 08:49:27.974316] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69931' to capture a snapshot of events at runtime. 00:05:34.097 [2024-11-28 08:49:27.974398] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.097 [2024-11-28 08:49:27.974431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.097 [2024-11-28 08:49:27.974459] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69931 for offline analysis/debug. 
00:05:34.097 [2024-11-28 08:49:27.974525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.663 08:49:28 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.663 08:49:28 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.663 08:49:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.663 08:49:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.663 08:49:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.663 08:49:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.663 08:49:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.663 08:49:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.663 08:49:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.663 ************************************ 00:05:34.663 START TEST rpc_integrity 00:05:34.663 ************************************ 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.663 08:49:28 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.663 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.663 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.664 { 00:05:34.664 "name": "Malloc0", 00:05:34.664 "aliases": [ 00:05:34.664 "51d0c14b-e227-4692-a1d3-d10e75d63694" 00:05:34.664 ], 00:05:34.664 "product_name": "Malloc disk", 00:05:34.664 "block_size": 512, 00:05:34.664 "num_blocks": 16384, 00:05:34.664 "uuid": "51d0c14b-e227-4692-a1d3-d10e75d63694", 00:05:34.664 "assigned_rate_limits": { 00:05:34.664 "rw_ios_per_sec": 0, 00:05:34.664 "rw_mbytes_per_sec": 0, 00:05:34.664 "r_mbytes_per_sec": 0, 00:05:34.664 "w_mbytes_per_sec": 0 00:05:34.664 }, 00:05:34.664 "claimed": false, 00:05:34.664 "zoned": false, 00:05:34.664 "supported_io_types": { 00:05:34.664 "read": true, 00:05:34.664 "write": true, 00:05:34.664 "unmap": true, 00:05:34.664 "flush": true, 00:05:34.664 "reset": true, 00:05:34.664 "nvme_admin": false, 00:05:34.664 "nvme_io": false, 00:05:34.664 "nvme_io_md": false, 00:05:34.664 "write_zeroes": true, 00:05:34.664 "zcopy": true, 00:05:34.664 "get_zone_info": false, 00:05:34.664 "zone_management": false, 00:05:34.664 "zone_append": false, 00:05:34.664 "compare": false, 00:05:34.664 "compare_and_write": false, 00:05:34.664 "abort": true, 00:05:34.664 "seek_hole": false, 
00:05:34.664 "seek_data": false, 00:05:34.664 "copy": true, 00:05:34.664 "nvme_iov_md": false 00:05:34.664 }, 00:05:34.664 "memory_domains": [ 00:05:34.664 { 00:05:34.664 "dma_device_id": "system", 00:05:34.664 "dma_device_type": 1 00:05:34.664 }, 00:05:34.664 { 00:05:34.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.664 "dma_device_type": 2 00:05:34.664 } 00:05:34.664 ], 00:05:34.664 "driver_specific": {} 00:05:34.664 } 00:05:34.664 ]' 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.664 [2024-11-28 08:49:28.721002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.664 [2024-11-28 08:49:28.721059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.664 [2024-11-28 08:49:28.721081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:34.664 [2024-11-28 08:49:28.721091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.664 [2024-11-28 08:49:28.723346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.664 [2024-11-28 08:49:28.723382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.664 Passthru0 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.664 { 00:05:34.664 "name": "Malloc0", 00:05:34.664 "aliases": [ 00:05:34.664 "51d0c14b-e227-4692-a1d3-d10e75d63694" 00:05:34.664 ], 00:05:34.664 "product_name": "Malloc disk", 00:05:34.664 "block_size": 512, 00:05:34.664 "num_blocks": 16384, 00:05:34.664 "uuid": "51d0c14b-e227-4692-a1d3-d10e75d63694", 00:05:34.664 "assigned_rate_limits": { 00:05:34.664 "rw_ios_per_sec": 0, 00:05:34.664 "rw_mbytes_per_sec": 0, 00:05:34.664 "r_mbytes_per_sec": 0, 00:05:34.664 "w_mbytes_per_sec": 0 00:05:34.664 }, 00:05:34.664 "claimed": true, 00:05:34.664 "claim_type": "exclusive_write", 00:05:34.664 "zoned": false, 00:05:34.664 "supported_io_types": { 00:05:34.664 "read": true, 00:05:34.664 "write": true, 00:05:34.664 "unmap": true, 00:05:34.664 "flush": true, 00:05:34.664 "reset": true, 00:05:34.664 "nvme_admin": false, 00:05:34.664 "nvme_io": false, 00:05:34.664 "nvme_io_md": false, 00:05:34.664 "write_zeroes": true, 00:05:34.664 "zcopy": true, 00:05:34.664 "get_zone_info": false, 00:05:34.664 "zone_management": false, 00:05:34.664 "zone_append": false, 00:05:34.664 "compare": false, 00:05:34.664 "compare_and_write": false, 00:05:34.664 "abort": true, 00:05:34.664 "seek_hole": false, 00:05:34.664 "seek_data": false, 00:05:34.664 "copy": true, 00:05:34.664 "nvme_iov_md": false 00:05:34.664 }, 00:05:34.664 "memory_domains": [ 00:05:34.664 { 00:05:34.664 "dma_device_id": "system", 00:05:34.664 "dma_device_type": 1 00:05:34.664 }, 00:05:34.664 { 00:05:34.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.664 "dma_device_type": 2 00:05:34.664 } 00:05:34.664 ], 00:05:34.664 "driver_specific": {} 00:05:34.664 }, 00:05:34.664 { 00:05:34.664 "name": "Passthru0", 00:05:34.664 "aliases": [ 00:05:34.664 "cbaba9c0-3662-504d-9736-63de3c6ff9c2" 00:05:34.664 ], 00:05:34.664 "product_name": "passthru", 00:05:34.664 
"block_size": 512, 00:05:34.664 "num_blocks": 16384, 00:05:34.664 "uuid": "cbaba9c0-3662-504d-9736-63de3c6ff9c2", 00:05:34.664 "assigned_rate_limits": { 00:05:34.664 "rw_ios_per_sec": 0, 00:05:34.664 "rw_mbytes_per_sec": 0, 00:05:34.664 "r_mbytes_per_sec": 0, 00:05:34.664 "w_mbytes_per_sec": 0 00:05:34.664 }, 00:05:34.664 "claimed": false, 00:05:34.664 "zoned": false, 00:05:34.664 "supported_io_types": { 00:05:34.664 "read": true, 00:05:34.664 "write": true, 00:05:34.664 "unmap": true, 00:05:34.664 "flush": true, 00:05:34.664 "reset": true, 00:05:34.664 "nvme_admin": false, 00:05:34.664 "nvme_io": false, 00:05:34.664 "nvme_io_md": false, 00:05:34.664 "write_zeroes": true, 00:05:34.664 "zcopy": true, 00:05:34.664 "get_zone_info": false, 00:05:34.664 "zone_management": false, 00:05:34.664 "zone_append": false, 00:05:34.664 "compare": false, 00:05:34.664 "compare_and_write": false, 00:05:34.664 "abort": true, 00:05:34.664 "seek_hole": false, 00:05:34.664 "seek_data": false, 00:05:34.664 "copy": true, 00:05:34.664 "nvme_iov_md": false 00:05:34.664 }, 00:05:34.664 "memory_domains": [ 00:05:34.664 { 00:05:34.664 "dma_device_id": "system", 00:05:34.664 "dma_device_type": 1 00:05:34.664 }, 00:05:34.664 { 00:05:34.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.664 "dma_device_type": 2 00:05:34.664 } 00:05:34.664 ], 00:05:34.664 "driver_specific": { 00:05:34.664 "passthru": { 00:05:34.664 "name": "Passthru0", 00:05:34.664 "base_bdev_name": "Malloc0" 00:05:34.664 } 00:05:34.664 } 00:05:34.664 } 00:05:34.664 ]' 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.664 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.664 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.923 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.923 ************************************ 00:05:34.923 END TEST rpc_integrity 00:05:34.923 ************************************ 00:05:34.923 08:49:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.923 00:05:34.923 real 0m0.224s 00:05:34.923 user 0m0.130s 00:05:34.923 sys 0m0.029s 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.923 08:49:28 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.923 08:49:28 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.923 08:49:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 ************************************ 00:05:34.923 START TEST rpc_plugins 00:05:34.923 ************************************ 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.923 { 00:05:34.923 "name": "Malloc1", 00:05:34.923 "aliases": [ 00:05:34.923 "5e18bf71-9afa-4eea-8622-5b2eed9f222c" 00:05:34.923 ], 00:05:34.923 "product_name": "Malloc disk", 00:05:34.923 "block_size": 4096, 00:05:34.923 "num_blocks": 256, 00:05:34.923 "uuid": "5e18bf71-9afa-4eea-8622-5b2eed9f222c", 00:05:34.923 "assigned_rate_limits": { 00:05:34.923 "rw_ios_per_sec": 0, 00:05:34.923 "rw_mbytes_per_sec": 0, 00:05:34.923 "r_mbytes_per_sec": 0, 00:05:34.923 "w_mbytes_per_sec": 0 00:05:34.923 }, 00:05:34.923 "claimed": false, 00:05:34.923 "zoned": false, 00:05:34.923 "supported_io_types": { 00:05:34.923 "read": true, 00:05:34.923 "write": true, 00:05:34.923 "unmap": true, 00:05:34.923 "flush": true, 00:05:34.923 "reset": true, 00:05:34.923 "nvme_admin": false, 00:05:34.923 "nvme_io": false, 00:05:34.923 "nvme_io_md": false, 00:05:34.923 "write_zeroes": true, 00:05:34.923 "zcopy": true, 00:05:34.923 "get_zone_info": false, 00:05:34.923 "zone_management": false, 00:05:34.923 "zone_append": false, 00:05:34.923 "compare": false, 00:05:34.923 "compare_and_write": false, 00:05:34.923 "abort": true, 00:05:34.923 "seek_hole": false, 00:05:34.923 "seek_data": false, 00:05:34.923 "copy": 
true, 00:05:34.923 "nvme_iov_md": false 00:05:34.923 }, 00:05:34.923 "memory_domains": [ 00:05:34.923 { 00:05:34.923 "dma_device_id": "system", 00:05:34.923 "dma_device_type": 1 00:05:34.923 }, 00:05:34.923 { 00:05:34.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.923 "dma_device_type": 2 00:05:34.923 } 00:05:34.923 ], 00:05:34.923 "driver_specific": {} 00:05:34.923 } 00:05:34.923 ]' 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 08:49:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.923 08:49:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.923 ************************************ 00:05:34.923 END TEST rpc_plugins 00:05:34.923 ************************************ 00:05:34.923 08:49:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.923 00:05:34.923 real 0m0.105s 00:05:34.923 user 0m0.058s 00:05:34.923 sys 0m0.017s 00:05:34.923 08:49:29 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.923 08:49:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 08:49:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.182 08:49:29 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.182 08:49:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.182 08:49:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 ************************************ 00:05:35.182 START TEST rpc_trace_cmd_test 00:05:35.182 ************************************ 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.182 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69931", 00:05:35.182 "tpoint_group_mask": "0x8", 00:05:35.182 "iscsi_conn": { 00:05:35.182 "mask": "0x2", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "scsi": { 00:05:35.182 "mask": "0x4", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "bdev": { 00:05:35.182 "mask": "0x8", 00:05:35.182 "tpoint_mask": "0xffffffffffffffff" 00:05:35.182 }, 00:05:35.182 "nvmf_rdma": { 00:05:35.182 "mask": "0x10", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "nvmf_tcp": { 00:05:35.182 "mask": "0x20", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "ftl": { 00:05:35.182 "mask": "0x40", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "blobfs": { 00:05:35.182 "mask": "0x80", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "dsa": { 00:05:35.182 "mask": "0x200", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "thread": { 00:05:35.182 "mask": "0x400", 00:05:35.182 
"tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "nvme_pcie": { 00:05:35.182 "mask": "0x800", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "iaa": { 00:05:35.182 "mask": "0x1000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "nvme_tcp": { 00:05:35.182 "mask": "0x2000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "bdev_nvme": { 00:05:35.182 "mask": "0x4000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "sock": { 00:05:35.182 "mask": "0x8000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "blob": { 00:05:35.182 "mask": "0x10000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 }, 00:05:35.182 "bdev_raid": { 00:05:35.182 "mask": "0x20000", 00:05:35.182 "tpoint_mask": "0x0" 00:05:35.182 } 00:05:35.182 }' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.182 ************************************ 00:05:35.182 END TEST rpc_trace_cmd_test 00:05:35.182 ************************************ 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.182 00:05:35.182 real 0m0.176s 00:05:35.182 user 0m0.140s 00:05:35.182 sys 0m0.026s 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.182 08:49:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 08:49:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.182 08:49:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.182 08:49:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.182 08:49:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.182 08:49:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.182 08:49:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 ************************************ 00:05:35.182 START TEST rpc_daemon_integrity 00:05:35.182 ************************************ 00:05:35.182 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:35.182 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.182 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.182 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.441 { 00:05:35.441 "name": "Malloc2", 00:05:35.441 "aliases": [ 00:05:35.441 "bfc263b5-26b8-428f-9ec7-aa3769de481b" 00:05:35.441 ], 00:05:35.441 "product_name": "Malloc disk", 00:05:35.441 "block_size": 512, 00:05:35.441 "num_blocks": 16384, 00:05:35.441 "uuid": "bfc263b5-26b8-428f-9ec7-aa3769de481b", 00:05:35.441 "assigned_rate_limits": { 00:05:35.441 "rw_ios_per_sec": 0, 00:05:35.441 "rw_mbytes_per_sec": 0, 00:05:35.441 "r_mbytes_per_sec": 0, 00:05:35.441 "w_mbytes_per_sec": 0 00:05:35.441 }, 00:05:35.441 "claimed": false, 00:05:35.441 "zoned": false, 00:05:35.441 "supported_io_types": { 00:05:35.441 "read": true, 00:05:35.441 "write": true, 00:05:35.441 "unmap": true, 00:05:35.441 "flush": true, 00:05:35.441 "reset": true, 00:05:35.441 "nvme_admin": false, 00:05:35.441 "nvme_io": false, 00:05:35.441 "nvme_io_md": false, 00:05:35.441 "write_zeroes": true, 00:05:35.441 "zcopy": true, 00:05:35.441 "get_zone_info": false, 00:05:35.441 "zone_management": false, 00:05:35.441 "zone_append": false, 00:05:35.441 "compare": false, 00:05:35.441 "compare_and_write": false, 00:05:35.441 "abort": true, 00:05:35.441 "seek_hole": false, 00:05:35.441 "seek_data": false, 00:05:35.441 "copy": true, 00:05:35.441 "nvme_iov_md": false 00:05:35.441 }, 00:05:35.441 "memory_domains": [ 00:05:35.441 { 00:05:35.441 "dma_device_id": "system", 00:05:35.441 "dma_device_type": 1 00:05:35.441 }, 00:05:35.441 { 00:05:35.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.441 "dma_device_type": 2 00:05:35.441 } 00:05:35.441 ], 00:05:35.441 "driver_specific": {} 00:05:35.441 } 00:05:35.441 ]' 
00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.441 [2024-11-28 08:49:29.405350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.441 [2024-11-28 08:49:29.405407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.441 [2024-11-28 08:49:29.405429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:35.441 [2024-11-28 08:49:29.405438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.441 [2024-11-28 08:49:29.407633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.441 [2024-11-28 08:49:29.407669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.441 Passthru0 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.441 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.441 { 00:05:35.441 "name": "Malloc2", 00:05:35.441 "aliases": [ 00:05:35.441 "bfc263b5-26b8-428f-9ec7-aa3769de481b" 00:05:35.441 ], 00:05:35.441 "product_name": "Malloc disk", 00:05:35.441 "block_size": 
512, 00:05:35.442 "num_blocks": 16384, 00:05:35.442 "uuid": "bfc263b5-26b8-428f-9ec7-aa3769de481b", 00:05:35.442 "assigned_rate_limits": { 00:05:35.442 "rw_ios_per_sec": 0, 00:05:35.442 "rw_mbytes_per_sec": 0, 00:05:35.442 "r_mbytes_per_sec": 0, 00:05:35.442 "w_mbytes_per_sec": 0 00:05:35.442 }, 00:05:35.442 "claimed": true, 00:05:35.442 "claim_type": "exclusive_write", 00:05:35.442 "zoned": false, 00:05:35.442 "supported_io_types": { 00:05:35.442 "read": true, 00:05:35.442 "write": true, 00:05:35.442 "unmap": true, 00:05:35.442 "flush": true, 00:05:35.442 "reset": true, 00:05:35.442 "nvme_admin": false, 00:05:35.442 "nvme_io": false, 00:05:35.442 "nvme_io_md": false, 00:05:35.442 "write_zeroes": true, 00:05:35.442 "zcopy": true, 00:05:35.442 "get_zone_info": false, 00:05:35.442 "zone_management": false, 00:05:35.442 "zone_append": false, 00:05:35.442 "compare": false, 00:05:35.442 "compare_and_write": false, 00:05:35.442 "abort": true, 00:05:35.442 "seek_hole": false, 00:05:35.442 "seek_data": false, 00:05:35.442 "copy": true, 00:05:35.442 "nvme_iov_md": false 00:05:35.442 }, 00:05:35.442 "memory_domains": [ 00:05:35.442 { 00:05:35.442 "dma_device_id": "system", 00:05:35.442 "dma_device_type": 1 00:05:35.442 }, 00:05:35.442 { 00:05:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.442 "dma_device_type": 2 00:05:35.442 } 00:05:35.442 ], 00:05:35.442 "driver_specific": {} 00:05:35.442 }, 00:05:35.442 { 00:05:35.442 "name": "Passthru0", 00:05:35.442 "aliases": [ 00:05:35.442 "2dc9deeb-6077-5c1e-968d-e0ae96ad0508" 00:05:35.442 ], 00:05:35.442 "product_name": "passthru", 00:05:35.442 "block_size": 512, 00:05:35.442 "num_blocks": 16384, 00:05:35.442 "uuid": "2dc9deeb-6077-5c1e-968d-e0ae96ad0508", 00:05:35.442 "assigned_rate_limits": { 00:05:35.442 "rw_ios_per_sec": 0, 00:05:35.442 "rw_mbytes_per_sec": 0, 00:05:35.442 "r_mbytes_per_sec": 0, 00:05:35.442 "w_mbytes_per_sec": 0 00:05:35.442 }, 00:05:35.442 "claimed": false, 00:05:35.442 "zoned": false, 00:05:35.442 
"supported_io_types": { 00:05:35.442 "read": true, 00:05:35.442 "write": true, 00:05:35.442 "unmap": true, 00:05:35.442 "flush": true, 00:05:35.442 "reset": true, 00:05:35.442 "nvme_admin": false, 00:05:35.442 "nvme_io": false, 00:05:35.442 "nvme_io_md": false, 00:05:35.442 "write_zeroes": true, 00:05:35.442 "zcopy": true, 00:05:35.442 "get_zone_info": false, 00:05:35.442 "zone_management": false, 00:05:35.442 "zone_append": false, 00:05:35.442 "compare": false, 00:05:35.442 "compare_and_write": false, 00:05:35.442 "abort": true, 00:05:35.442 "seek_hole": false, 00:05:35.442 "seek_data": false, 00:05:35.442 "copy": true, 00:05:35.442 "nvme_iov_md": false 00:05:35.442 }, 00:05:35.442 "memory_domains": [ 00:05:35.442 { 00:05:35.442 "dma_device_id": "system", 00:05:35.442 "dma_device_type": 1 00:05:35.442 }, 00:05:35.442 { 00:05:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.442 "dma_device_type": 2 00:05:35.442 } 00:05:35.442 ], 00:05:35.442 "driver_specific": { 00:05:35.442 "passthru": { 00:05:35.442 "name": "Passthru0", 00:05:35.442 "base_bdev_name": "Malloc2" 00:05:35.442 } 00:05:35.442 } 00:05:35.442 } 00:05:35.442 ]' 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.442 ************************************ 00:05:35.442 END TEST rpc_daemon_integrity 00:05:35.442 ************************************ 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.442 00:05:35.442 real 0m0.221s 00:05:35.442 user 0m0.133s 00:05:35.442 sys 0m0.028s 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.442 08:49:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.442 08:49:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.442 08:49:29 rpc -- rpc/rpc.sh@84 -- # killprocess 69931 00:05:35.442 08:49:29 rpc -- common/autotest_common.sh@950 -- # '[' -z 69931 ']' 00:05:35.442 08:49:29 rpc -- common/autotest_common.sh@954 -- # kill -0 69931 00:05:35.442 08:49:29 rpc -- common/autotest_common.sh@955 -- # uname 00:05:35.442 08:49:29 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.442 08:49:29 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69931 00:05:35.700 killing process with pid 69931 00:05:35.700 08:49:29 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.700 08:49:29 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.700 08:49:29 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69931' 00:05:35.700 08:49:29 rpc -- common/autotest_common.sh@969 -- # kill 69931 00:05:35.700 08:49:29 rpc -- common/autotest_common.sh@974 -- # wait 69931 00:05:35.959 ************************************ 00:05:35.959 END TEST rpc 00:05:35.959 ************************************ 00:05:35.959 00:05:35.959 real 0m2.324s 00:05:35.959 user 0m2.700s 00:05:35.959 sys 0m0.662s 00:05:35.959 08:49:29 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.959 08:49:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.959 08:49:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:35.959 08:49:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.959 08:49:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.959 08:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:35.959 ************************************ 00:05:35.959 START TEST skip_rpc 00:05:35.959 ************************************ 00:05:35.959 08:49:29 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:35.959 * Looking for test storage... 
00:05:35.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:35.959 08:49:29 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.959 08:49:29 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.959 08:49:29 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.959 08:49:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.959 --rc genhtml_branch_coverage=1 00:05:35.959 --rc genhtml_function_coverage=1 00:05:35.959 --rc genhtml_legend=1 00:05:35.959 --rc geninfo_all_blocks=1 00:05:35.959 --rc geninfo_unexecuted_blocks=1 00:05:35.959 00:05:35.959 ' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.959 --rc genhtml_branch_coverage=1 00:05:35.959 --rc genhtml_function_coverage=1 00:05:35.959 --rc genhtml_legend=1 00:05:35.959 --rc geninfo_all_blocks=1 00:05:35.959 --rc geninfo_unexecuted_blocks=1 00:05:35.959 00:05:35.959 ' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.959 --rc genhtml_branch_coverage=1 00:05:35.959 --rc genhtml_function_coverage=1 00:05:35.959 --rc genhtml_legend=1 00:05:35.959 --rc geninfo_all_blocks=1 00:05:35.959 --rc geninfo_unexecuted_blocks=1 00:05:35.959 00:05:35.959 ' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.959 --rc genhtml_branch_coverage=1 00:05:35.959 --rc genhtml_function_coverage=1 00:05:35.959 --rc genhtml_legend=1 00:05:35.959 --rc geninfo_all_blocks=1 00:05:35.959 --rc geninfo_unexecuted_blocks=1 00:05:35.959 00:05:35.959 ' 00:05:35.959 08:49:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.959 08:49:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:35.959 08:49:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.959 08:49:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.218 ************************************ 00:05:36.218 START TEST skip_rpc 00:05:36.218 ************************************ 00:05:36.218 08:49:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:36.218 08:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70133 00:05:36.218 08:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.218 08:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.218 08:49:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.218 [2024-11-28 08:49:30.151990] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:36.218 [2024-11-28 08:49:30.152106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:05:36.218 [2024-11-28 08:49:30.292260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.218 [2024-11-28 08:49:30.324777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70133 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70133 ']' 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70133 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70133 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.486 killing process with pid 70133 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70133' 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70133 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70133 00:05:41.486 00:05:41.486 real 0m5.265s 00:05:41.486 user 0m4.924s 00:05:41.486 sys 0m0.238s 00:05:41.486 ************************************ 00:05:41.486 END TEST skip_rpc 00:05:41.486 ************************************ 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.486 08:49:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 08:49:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.486 08:49:35 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.486 08:49:35 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.486 08:49:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 
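The skip_rpc run above drives `rpc_cmd spdk_get_version` through a `NOT` wrapper, so the test passes only when the RPC fails (the target was started with `--no-rpc-server`). A minimal, simplified sketch of that inversion helper — the real `autotest_common.sh` version also validates its argument via `valid_exec_arg` first:

```shell
#!/usr/bin/env bash
# NOT: succeed only when the wrapped command fails (simplified sketch
# of the helper exercised in the trace above).
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 usually indicate death by signal;
    # propagate those instead of inverting them.
    if (( es > 128 )); then
        return "$es"
    fi
    # NOT succeeds exactly when the command's exit status was non-zero.
    (( es != 0 ))
}

NOT false && echo "false was inverted to success"
```

`NOT true` returns non-zero, which is why the trace ends with `es=1` when the inverted command unexpectedly succeeds.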
************************************ 00:05:41.486 START TEST skip_rpc_with_json 00:05:41.486 ************************************ 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70215 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70215 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70215 ']' 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.486 08:49:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 [2024-11-28 08:49:35.470501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:41.486 [2024-11-28 08:49:35.470609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70215 ] 00:05:41.744 [2024-11-28 08:49:35.618323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.744 [2024-11-28 08:49:35.646793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.310 [2024-11-28 08:49:36.310483] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.310 request: 00:05:42.310 { 00:05:42.310 "trtype": "tcp", 00:05:42.310 "method": "nvmf_get_transports", 00:05:42.310 "req_id": 1 00:05:42.310 } 00:05:42.310 Got JSON-RPC error response 00:05:42.310 response: 00:05:42.310 { 00:05:42.310 "code": -19, 00:05:42.310 "message": "No such device" 00:05:42.310 } 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.310 [2024-11-28 08:49:36.322573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
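The request/response pair above is the JSON-RPC error a client sees when querying a transport that has not been created yet. A hedged sketch of how a test script might assert on that error shape — the response text here is copied from the log, not produced by a live target:

```shell
#!/usr/bin/env bash
# Response body copied from the trace above; in a real run it would come
# back from the target's RPC socket (e.g. via scripts/rpc.py).
response='{
  "code": -19,
  "message": "No such device"
}'

# Assert the expected JSON-RPC error code before creating the transport.
if grep -q '"code": -19' <<<"$response"; then
    echo "transport does not exist yet, as expected"
fi
```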
00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.310 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.569 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.569 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.569 { 00:05:42.569 "subsystems": [ 00:05:42.569 { 00:05:42.569 "subsystem": "fsdev", 00:05:42.569 "config": [ 00:05:42.569 { 00:05:42.569 "method": "fsdev_set_opts", 00:05:42.569 "params": { 00:05:42.569 "fsdev_io_pool_size": 65535, 00:05:42.569 "fsdev_io_cache_size": 256 00:05:42.569 } 00:05:42.569 } 00:05:42.569 ] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "keyring", 00:05:42.569 "config": [] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "iobuf", 00:05:42.569 "config": [ 00:05:42.569 { 00:05:42.569 "method": "iobuf_set_options", 00:05:42.569 "params": { 00:05:42.569 "small_pool_count": 8192, 00:05:42.569 "large_pool_count": 1024, 00:05:42.569 "small_bufsize": 8192, 00:05:42.569 "large_bufsize": 135168 00:05:42.569 } 00:05:42.569 } 00:05:42.569 ] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "sock", 00:05:42.569 "config": [ 00:05:42.569 { 00:05:42.569 "method": "sock_set_default_impl", 00:05:42.569 "params": { 00:05:42.569 "impl_name": "posix" 00:05:42.569 } 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "method": "sock_impl_set_options", 00:05:42.569 "params": { 00:05:42.569 "impl_name": "ssl", 00:05:42.569 "recv_buf_size": 4096, 00:05:42.569 "send_buf_size": 4096, 00:05:42.569 "enable_recv_pipe": true, 00:05:42.569 "enable_quickack": false, 00:05:42.569 "enable_placement_id": 0, 00:05:42.569 
"enable_zerocopy_send_server": true, 00:05:42.569 "enable_zerocopy_send_client": false, 00:05:42.569 "zerocopy_threshold": 0, 00:05:42.569 "tls_version": 0, 00:05:42.569 "enable_ktls": false 00:05:42.569 } 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "method": "sock_impl_set_options", 00:05:42.569 "params": { 00:05:42.569 "impl_name": "posix", 00:05:42.569 "recv_buf_size": 2097152, 00:05:42.569 "send_buf_size": 2097152, 00:05:42.569 "enable_recv_pipe": true, 00:05:42.569 "enable_quickack": false, 00:05:42.569 "enable_placement_id": 0, 00:05:42.569 "enable_zerocopy_send_server": true, 00:05:42.569 "enable_zerocopy_send_client": false, 00:05:42.569 "zerocopy_threshold": 0, 00:05:42.569 "tls_version": 0, 00:05:42.569 "enable_ktls": false 00:05:42.569 } 00:05:42.569 } 00:05:42.569 ] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "vmd", 00:05:42.569 "config": [] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "accel", 00:05:42.569 "config": [ 00:05:42.569 { 00:05:42.569 "method": "accel_set_options", 00:05:42.569 "params": { 00:05:42.569 "small_cache_size": 128, 00:05:42.569 "large_cache_size": 16, 00:05:42.569 "task_count": 2048, 00:05:42.569 "sequence_count": 2048, 00:05:42.569 "buf_count": 2048 00:05:42.569 } 00:05:42.569 } 00:05:42.569 ] 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "subsystem": "bdev", 00:05:42.569 "config": [ 00:05:42.569 { 00:05:42.569 "method": "bdev_set_options", 00:05:42.569 "params": { 00:05:42.569 "bdev_io_pool_size": 65535, 00:05:42.569 "bdev_io_cache_size": 256, 00:05:42.569 "bdev_auto_examine": true, 00:05:42.569 "iobuf_small_cache_size": 128, 00:05:42.569 "iobuf_large_cache_size": 16 00:05:42.569 } 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "method": "bdev_raid_set_options", 00:05:42.569 "params": { 00:05:42.569 "process_window_size_kb": 1024, 00:05:42.569 "process_max_bandwidth_mb_sec": 0 00:05:42.569 } 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "method": "bdev_iscsi_set_options", 00:05:42.569 "params": { 00:05:42.569 
"timeout_sec": 30 00:05:42.569 } 00:05:42.569 }, 00:05:42.569 { 00:05:42.569 "method": "bdev_nvme_set_options", 00:05:42.569 "params": { 00:05:42.569 "action_on_timeout": "none", 00:05:42.569 "timeout_us": 0, 00:05:42.569 "timeout_admin_us": 0, 00:05:42.569 "keep_alive_timeout_ms": 10000, 00:05:42.569 "arbitration_burst": 0, 00:05:42.569 "low_priority_weight": 0, 00:05:42.569 "medium_priority_weight": 0, 00:05:42.569 "high_priority_weight": 0, 00:05:42.569 "nvme_adminq_poll_period_us": 10000, 00:05:42.569 "nvme_ioq_poll_period_us": 0, 00:05:42.569 "io_queue_requests": 0, 00:05:42.569 "delay_cmd_submit": true, 00:05:42.569 "transport_retry_count": 4, 00:05:42.569 "bdev_retry_count": 3, 00:05:42.569 "transport_ack_timeout": 0, 00:05:42.569 "ctrlr_loss_timeout_sec": 0, 00:05:42.569 "reconnect_delay_sec": 0, 00:05:42.570 "fast_io_fail_timeout_sec": 0, 00:05:42.570 "disable_auto_failback": false, 00:05:42.570 "generate_uuids": false, 00:05:42.570 "transport_tos": 0, 00:05:42.570 "nvme_error_stat": false, 00:05:42.570 "rdma_srq_size": 0, 00:05:42.570 "io_path_stat": false, 00:05:42.570 "allow_accel_sequence": false, 00:05:42.570 "rdma_max_cq_size": 0, 00:05:42.570 "rdma_cm_event_timeout_ms": 0, 00:05:42.570 "dhchap_digests": [ 00:05:42.570 "sha256", 00:05:42.570 "sha384", 00:05:42.570 "sha512" 00:05:42.570 ], 00:05:42.570 "dhchap_dhgroups": [ 00:05:42.570 "null", 00:05:42.570 "ffdhe2048", 00:05:42.570 "ffdhe3072", 00:05:42.570 "ffdhe4096", 00:05:42.570 "ffdhe6144", 00:05:42.570 "ffdhe8192" 00:05:42.570 ] 00:05:42.570 } 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "method": "bdev_nvme_set_hotplug", 00:05:42.570 "params": { 00:05:42.570 "period_us": 100000, 00:05:42.570 "enable": false 00:05:42.570 } 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "method": "bdev_wait_for_examine" 00:05:42.570 } 00:05:42.570 ] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "scsi", 00:05:42.570 "config": null 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "scheduler", 
00:05:42.570 "config": [ 00:05:42.570 { 00:05:42.570 "method": "framework_set_scheduler", 00:05:42.570 "params": { 00:05:42.570 "name": "static" 00:05:42.570 } 00:05:42.570 } 00:05:42.570 ] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "vhost_scsi", 00:05:42.570 "config": [] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "vhost_blk", 00:05:42.570 "config": [] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "ublk", 00:05:42.570 "config": [] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "nbd", 00:05:42.570 "config": [] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "nvmf", 00:05:42.570 "config": [ 00:05:42.570 { 00:05:42.570 "method": "nvmf_set_config", 00:05:42.570 "params": { 00:05:42.570 "discovery_filter": "match_any", 00:05:42.570 "admin_cmd_passthru": { 00:05:42.570 "identify_ctrlr": false 00:05:42.570 }, 00:05:42.570 "dhchap_digests": [ 00:05:42.570 "sha256", 00:05:42.570 "sha384", 00:05:42.570 "sha512" 00:05:42.570 ], 00:05:42.570 "dhchap_dhgroups": [ 00:05:42.570 "null", 00:05:42.570 "ffdhe2048", 00:05:42.570 "ffdhe3072", 00:05:42.570 "ffdhe4096", 00:05:42.570 "ffdhe6144", 00:05:42.570 "ffdhe8192" 00:05:42.570 ] 00:05:42.570 } 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "method": "nvmf_set_max_subsystems", 00:05:42.570 "params": { 00:05:42.570 "max_subsystems": 1024 00:05:42.570 } 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "method": "nvmf_set_crdt", 00:05:42.570 "params": { 00:05:42.570 "crdt1": 0, 00:05:42.570 "crdt2": 0, 00:05:42.570 "crdt3": 0 00:05:42.570 } 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "method": "nvmf_create_transport", 00:05:42.570 "params": { 00:05:42.570 "trtype": "TCP", 00:05:42.570 "max_queue_depth": 128, 00:05:42.570 "max_io_qpairs_per_ctrlr": 127, 00:05:42.570 "in_capsule_data_size": 4096, 00:05:42.570 "max_io_size": 131072, 00:05:42.570 "io_unit_size": 131072, 00:05:42.570 "max_aq_depth": 128, 00:05:42.570 "num_shared_buffers": 511, 00:05:42.570 "buf_cache_size": 4294967295, 
00:05:42.570 "dif_insert_or_strip": false, 00:05:42.570 "zcopy": false, 00:05:42.570 "c2h_success": true, 00:05:42.570 "sock_priority": 0, 00:05:42.570 "abort_timeout_sec": 1, 00:05:42.570 "ack_timeout": 0, 00:05:42.570 "data_wr_pool_size": 0 00:05:42.570 } 00:05:42.570 } 00:05:42.570 ] 00:05:42.570 }, 00:05:42.570 { 00:05:42.570 "subsystem": "iscsi", 00:05:42.570 "config": [ 00:05:42.570 { 00:05:42.570 "method": "iscsi_set_options", 00:05:42.570 "params": { 00:05:42.570 "node_base": "iqn.2016-06.io.spdk", 00:05:42.570 "max_sessions": 128, 00:05:42.570 "max_connections_per_session": 2, 00:05:42.570 "max_queue_depth": 64, 00:05:42.570 "default_time2wait": 2, 00:05:42.570 "default_time2retain": 20, 00:05:42.570 "first_burst_length": 8192, 00:05:42.570 "immediate_data": true, 00:05:42.570 "allow_duplicated_isid": false, 00:05:42.570 "error_recovery_level": 0, 00:05:42.570 "nop_timeout": 60, 00:05:42.570 "nop_in_interval": 30, 00:05:42.570 "disable_chap": false, 00:05:42.570 "require_chap": false, 00:05:42.570 "mutual_chap": false, 00:05:42.570 "chap_group": 0, 00:05:42.570 "max_large_datain_per_connection": 64, 00:05:42.570 "max_r2t_per_connection": 4, 00:05:42.570 "pdu_pool_size": 36864, 00:05:42.570 "immediate_data_pool_size": 16384, 00:05:42.570 "data_out_pool_size": 2048 00:05:42.570 } 00:05:42.570 } 00:05:42.570 ] 00:05:42.570 } 00:05:42.570 ] 00:05:42.570 } 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70215 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70215 ']' 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70215 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70215 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.570 killing process with pid 70215 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70215' 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70215 00:05:42.570 08:49:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70215 00:05:42.829 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70243 00:05:42.829 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.829 08:49:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70243 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70243 ']' 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70243 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70243 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.111 killing process with pid 70243 
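`killprocess` in the trace above first probes the pid with `kill -0` (signal 0 delivers nothing; it only checks that the process exists and is signalable) before sending a real signal and reaping the child. A reduced sketch of that pattern, assuming a plain SIGTERM is enough for the target:

```shell
#!/usr/bin/env bash
# Reduced sketch of the killprocess pattern from the trace: probe with
# signal 0, then terminate and reap. Assumes SIGTERM suffices.
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is not running" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
killprocess "$!"
```

The `wait` matters: without it the dead child lingers as a zombie until the shell exits, and a later `kill -0` on a recycled pid could hit the wrong process.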
00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70243' 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70243 00:05:48.111 08:49:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70243 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.111 00:05:48.111 real 0m6.607s 00:05:48.111 user 0m6.309s 00:05:48.111 sys 0m0.515s 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.111 ************************************ 00:05:48.111 END TEST skip_rpc_with_json 00:05:48.111 ************************************ 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.111 08:49:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.111 08:49:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.111 08:49:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.111 08:49:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.111 ************************************ 00:05:48.111 START TEST skip_rpc_with_delay 00:05:48.111 ************************************ 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.111 [2024-11-28 08:49:42.143130] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
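The `valid_exec_arg` steps traced above decide whether the first word of a command is runnable by dispatching on `type -t`: builtins and functions pass immediately, while files are resolved through `PATH` with `type -P` and checked for the execute bit. A condensed sketch of that dispatch:

```shell
#!/usr/bin/env bash
# Condensed sketch of the valid_exec_arg dispatch seen in the trace.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        builtin | function) ;;              # always executable in this shell
        file)
            # Resolve through PATH and require the execute bit.
            arg=$(type -P "$arg") && [[ -x $arg ]]
            ;;
        *) return 1 ;;                      # alias, keyword, or not found
    esac
}

valid_exec_arg echo && echo "echo is executable"
```

A `case` statement's exit status is that of the last command run in the matched clause (or 0 for an empty clause), which is what lets the `file)` branch report failure for a non-executable path.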
00:05:48.111 [2024-11-28 08:49:42.143262] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.111 00:05:48.111 real 0m0.122s 00:05:48.111 user 0m0.066s 00:05:48.111 sys 0m0.055s 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.111 ************************************ 00:05:48.111 END TEST skip_rpc_with_delay 00:05:48.111 ************************************ 00:05:48.111 08:49:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.373 08:49:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.373 08:49:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.373 08:49:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.373 08:49:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.373 08:49:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.373 08:49:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.373 ************************************ 00:05:48.373 START TEST exit_on_failed_rpc_init 00:05:48.373 ************************************ 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70355 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70355 00:05:48.373 08:49:42 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70355 ']' 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.373 08:49:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.373 [2024-11-28 08:49:42.329262] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:48.373 [2024-11-28 08:49:42.329791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70355 ] 00:05:48.373 [2024-11-28 08:49:42.482058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.634 [2024-11-28 08:49:42.536848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.207 08:49:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.207 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.207 [2024-11-28 08:49:43.274374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:49.207 [2024-11-28 08:49:43.274510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70367 ] 00:05:49.469 [2024-11-28 08:49:43.426231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.469 [2024-11-28 08:49:43.477564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.469 [2024-11-28 08:49:43.477682] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
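The `waitforlisten` calls above poll for the target's UNIX domain socket while also confirming the process is still alive, giving up after `max_retries` attempts — which is exactly the window in which the second target above fails because `/var/tmp/spdk.sock` is already in use. A simplified sketch under those assumptions (0.1 s poll interval, socket path defaulting to /var/tmp/spdk.sock as in the log):

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten pattern from the trace.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the target process already died.
        kill -0 "$pid" 2>/dev/null || return 1
        # -S: the path exists and is a socket, i.e. the server is listening.
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1   # gave up waiting
}

sleep 60 & pid=$!
# No server ever creates this socket, so this gives up after ~0.5 s.
waitforlisten "$pid" /tmp/no-such.sock 5 || echo "gave up waiting"
kill "$pid"; wait "$pid" 2>/dev/null || true
```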
00:05:49.469 [2024-11-28 08:49:43.477701] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.469 [2024-11-28 08:49:43.477718] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70355 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70355 ']' 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70355 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70355 00:05:49.730 killing process with pid 70355 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 70355' 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70355 00:05:49.730 08:49:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70355 00:05:49.991 ************************************ 00:05:49.991 END TEST exit_on_failed_rpc_init 00:05:49.991 ************************************ 00:05:49.991 00:05:49.991 real 0m1.757s 00:05:49.991 user 0m1.903s 00:05:49.991 sys 0m0.481s 00:05:49.991 08:49:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.991 08:49:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.991 08:49:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.991 ************************************ 00:05:49.991 END TEST skip_rpc 00:05:49.991 ************************************ 00:05:49.991 00:05:49.991 real 0m14.151s 00:05:49.991 user 0m13.379s 00:05:49.991 sys 0m1.454s 00:05:49.991 08:49:44 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.991 08:49:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.991 08:49:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.991 08:49:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.991 08:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.991 08:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.991 ************************************ 00:05:49.991 START TEST rpc_client 00:05:49.991 ************************************ 00:05:49.991 08:49:44 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.252 * Looking for test storage... 
00:05:50.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.252 08:49:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.252 --rc genhtml_branch_coverage=1 00:05:50.252 --rc genhtml_function_coverage=1 00:05:50.252 --rc genhtml_legend=1 00:05:50.252 --rc geninfo_all_blocks=1 00:05:50.252 --rc geninfo_unexecuted_blocks=1 00:05:50.252 00:05:50.252 ' 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.252 --rc genhtml_branch_coverage=1 00:05:50.252 --rc genhtml_function_coverage=1 00:05:50.252 --rc genhtml_legend=1 00:05:50.252 --rc geninfo_all_blocks=1 00:05:50.252 --rc geninfo_unexecuted_blocks=1 00:05:50.252 00:05:50.252 ' 00:05:50.252 08:49:44 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.252 --rc genhtml_branch_coverage=1 00:05:50.252 --rc genhtml_function_coverage=1 00:05:50.252 --rc genhtml_legend=1 00:05:50.252 --rc geninfo_all_blocks=1 00:05:50.252 --rc geninfo_unexecuted_blocks=1 00:05:50.252 00:05:50.252 ' 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.252 --rc genhtml_branch_coverage=1 00:05:50.252 --rc genhtml_function_coverage=1 00:05:50.252 --rc genhtml_legend=1 00:05:50.252 --rc geninfo_all_blocks=1 00:05:50.252 --rc geninfo_unexecuted_blocks=1 00:05:50.252 00:05:50.252 ' 00:05:50.252 08:49:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:50.252 OK 00:05:50.252 08:49:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.252 ************************************ 00:05:50.252 END TEST rpc_client 00:05:50.252 ************************************ 00:05:50.252 00:05:50.252 real 0m0.184s 00:05:50.252 user 0m0.112s 00:05:50.252 sys 0m0.078s 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.252 08:49:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.252 08:49:44 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.252 08:49:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.252 08:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.252 08:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:50.252 ************************************ 00:05:50.252 START TEST json_config 00:05:50.252 ************************************ 00:05:50.252 08:49:44 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.514 08:49:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.514 08:49:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.514 08:49:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.514 08:49:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.514 08:49:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.514 08:49:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:50.514 08:49:44 json_config -- scripts/common.sh@345 -- # : 1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.514 08:49:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.514 08:49:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@353 -- # local d=1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.514 08:49:44 json_config -- scripts/common.sh@355 -- # echo 1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.514 08:49:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@353 -- # local d=2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.514 08:49:44 json_config -- scripts/common.sh@355 -- # echo 2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.514 08:49:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.514 08:49:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.514 08:49:44 json_config -- scripts/common.sh@368 -- # return 0 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.514 --rc genhtml_branch_coverage=1 00:05:50.514 --rc genhtml_function_coverage=1 00:05:50.514 --rc genhtml_legend=1 00:05:50.514 --rc geninfo_all_blocks=1 00:05:50.514 --rc geninfo_unexecuted_blocks=1 00:05:50.514 00:05:50.514 ' 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.514 --rc genhtml_branch_coverage=1 00:05:50.514 --rc genhtml_function_coverage=1 00:05:50.514 --rc genhtml_legend=1 00:05:50.514 --rc geninfo_all_blocks=1 00:05:50.514 --rc geninfo_unexecuted_blocks=1 00:05:50.514 00:05:50.514 ' 00:05:50.514 08:49:44 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.514 --rc genhtml_branch_coverage=1 00:05:50.514 --rc genhtml_function_coverage=1 00:05:50.514 --rc genhtml_legend=1 00:05:50.514 --rc geninfo_all_blocks=1 00:05:50.514 --rc geninfo_unexecuted_blocks=1 00:05:50.514 00:05:50.514 ' 00:05:50.514 08:49:44 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.514 --rc genhtml_branch_coverage=1 00:05:50.514 --rc genhtml_function_coverage=1 00:05:50.514 --rc genhtml_legend=1 00:05:50.514 --rc geninfo_all_blocks=1 00:05:50.514 --rc geninfo_unexecuted_blocks=1 00:05:50.514 00:05:50.514 ' 00:05:50.514 08:49:44 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:189c006a-5f8e-491e-b60d-b5b66b03007e 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=189c006a-5f8e-491e-b60d-b5b66b03007e 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.514 08:49:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.514 08:49:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.514 08:49:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.514 08:49:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.514 08:49:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.514 08:49:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.514 08:49:44 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.514 08:49:44 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.514 08:49:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@51 -- # : 0 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.514 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.514 08:49:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.514 08:49:44 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:50.514 08:49:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.515 08:49:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.515 08:49:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.515 WARNING: No tests are enabled so not running JSON configuration tests 00:05:50.515 08:49:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.515 08:49:44 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:50.515 08:49:44 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:50.515 00:05:50.515 real 0m0.142s 00:05:50.515 user 0m0.086s 00:05:50.515 sys 0m0.055s 00:05:50.515 08:49:44 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.515 08:49:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.515 ************************************ 00:05:50.515 END TEST json_config 00:05:50.515 ************************************ 00:05:50.515 08:49:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:50.515 08:49:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.515 08:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.515 08:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:50.515 ************************************ 00:05:50.515 START TEST json_config_extra_key 00:05:50.515 ************************************ 00:05:50.515 08:49:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:50.515 08:49:44 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.515 08:49:44 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:50.515 08:49:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.775 08:49:44 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.775 08:49:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.776 --rc genhtml_branch_coverage=1 00:05:50.776 --rc genhtml_function_coverage=1 00:05:50.776 --rc genhtml_legend=1 00:05:50.776 --rc geninfo_all_blocks=1 00:05:50.776 --rc geninfo_unexecuted_blocks=1 00:05:50.776 00:05:50.776 ' 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.776 --rc genhtml_branch_coverage=1 00:05:50.776 --rc genhtml_function_coverage=1 00:05:50.776 --rc 
genhtml_legend=1 00:05:50.776 --rc geninfo_all_blocks=1 00:05:50.776 --rc geninfo_unexecuted_blocks=1 00:05:50.776 00:05:50.776 ' 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.776 --rc genhtml_branch_coverage=1 00:05:50.776 --rc genhtml_function_coverage=1 00:05:50.776 --rc genhtml_legend=1 00:05:50.776 --rc geninfo_all_blocks=1 00:05:50.776 --rc geninfo_unexecuted_blocks=1 00:05:50.776 00:05:50.776 ' 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.776 --rc genhtml_branch_coverage=1 00:05:50.776 --rc genhtml_function_coverage=1 00:05:50.776 --rc genhtml_legend=1 00:05:50.776 --rc geninfo_all_blocks=1 00:05:50.776 --rc geninfo_unexecuted_blocks=1 00:05:50.776 00:05:50.776 ' 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:189c006a-5f8e-491e-b60d-b5b66b03007e 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=189c006a-5f8e-491e-b60d-b5b66b03007e 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.776 08:49:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.776 08:49:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.776 08:49:44 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.776 08:49:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.776 08:49:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:50.776 08:49:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.776 08:49:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.776 INFO: launching applications... 00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:50.776 08:49:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.776 Waiting for target to run... 00:05:50.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70550 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.776 08:49:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70550 /var/tmp/spdk_tgt.sock 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70550 ']' 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:50.776 08:49:44 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.777 08:49:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.777 08:49:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.777 [2024-11-28 08:49:44.758506] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:50.777 [2024-11-28 08:49:44.758865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70550 ] 00:05:51.036 [2024-11-28 08:49:45.116050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.036 [2024-11-28 08:49:45.145182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.607 08:49:45 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.607 08:49:45 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:51.607 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:51.607 08:49:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:51.607 INFO: shutting down applications... 
00:05:51.607 08:49:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70550 ]] 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70550 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70550 00:05:51.607 08:49:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70550 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.181 08:49:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:52.181 SPDK target shutdown done 00:05:52.181 Success 00:05:52.181 08:49:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:52.181 00:05:52.181 real 0m1.596s 00:05:52.181 user 0m1.290s 00:05:52.181 sys 0m0.440s 00:05:52.181 ************************************ 00:05:52.181 END TEST json_config_extra_key 00:05:52.181 ************************************ 00:05:52.181 08:49:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:52.181 08:49:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.181 08:49:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.181 08:49:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.181 08:49:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.181 08:49:46 -- common/autotest_common.sh@10 -- # set +x 00:05:52.181 ************************************ 00:05:52.181 START TEST alias_rpc 00:05:52.181 ************************************ 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.181 * Looking for test storage... 00:05:52.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.181 08:49:46 alias_rpc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.181 08:49:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.181 --rc genhtml_branch_coverage=1 00:05:52.181 --rc genhtml_function_coverage=1 00:05:52.181 --rc genhtml_legend=1 00:05:52.181 --rc geninfo_all_blocks=1 00:05:52.181 --rc geninfo_unexecuted_blocks=1 00:05:52.181 00:05:52.181 ' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.181 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.181 --rc genhtml_branch_coverage=1 00:05:52.181 --rc genhtml_function_coverage=1 00:05:52.181 --rc genhtml_legend=1 00:05:52.181 --rc geninfo_all_blocks=1 00:05:52.181 --rc geninfo_unexecuted_blocks=1 00:05:52.181 00:05:52.181 ' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:52.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.181 --rc genhtml_branch_coverage=1 00:05:52.181 --rc genhtml_function_coverage=1 00:05:52.181 --rc genhtml_legend=1 00:05:52.181 --rc geninfo_all_blocks=1 00:05:52.181 --rc geninfo_unexecuted_blocks=1 00:05:52.181 00:05:52.181 ' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.181 --rc genhtml_branch_coverage=1 00:05:52.181 --rc genhtml_function_coverage=1 00:05:52.181 --rc genhtml_legend=1 00:05:52.181 --rc geninfo_all_blocks=1 00:05:52.181 --rc geninfo_unexecuted_blocks=1 00:05:52.181 00:05:52.181 ' 00:05:52.181 08:49:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.181 08:49:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70628 00:05:52.181 08:49:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70628 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70628 ']' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.181 08:49:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.181 08:49:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.443 [2024-11-28 08:49:46.369588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:52.443 [2024-11-28 08:49:46.369840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70628 ] 00:05:52.443 [2024-11-28 08:49:46.519556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.443 [2024-11-28 08:49:46.550936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.385 08:49:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:53.385 08:49:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70628 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70628 ']' 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70628 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70628 00:05:53.385 
killing process with pid 70628 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70628' 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 70628 00:05:53.385 08:49:47 alias_rpc -- common/autotest_common.sh@974 -- # wait 70628 00:05:53.647 ************************************ 00:05:53.647 END TEST alias_rpc 00:05:53.647 ************************************ 00:05:53.647 00:05:53.647 real 0m1.522s 00:05:53.647 user 0m1.645s 00:05:53.647 sys 0m0.357s 00:05:53.647 08:49:47 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.647 08:49:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.647 08:49:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:53.647 08:49:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.647 08:49:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.647 08:49:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.647 08:49:47 -- common/autotest_common.sh@10 -- # set +x 00:05:53.647 ************************************ 00:05:53.647 START TEST spdkcli_tcp 00:05:53.647 ************************************ 00:05:53.647 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.907 * Looking for test storage... 
00:05:53.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.908 08:49:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.908 --rc genhtml_branch_coverage=1 00:05:53.908 --rc genhtml_function_coverage=1 00:05:53.908 --rc genhtml_legend=1 00:05:53.908 --rc geninfo_all_blocks=1 00:05:53.908 --rc geninfo_unexecuted_blocks=1 00:05:53.908 00:05:53.908 ' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.908 --rc genhtml_branch_coverage=1 00:05:53.908 --rc genhtml_function_coverage=1 00:05:53.908 --rc genhtml_legend=1 00:05:53.908 --rc geninfo_all_blocks=1 00:05:53.908 --rc geninfo_unexecuted_blocks=1 00:05:53.908 00:05:53.908 ' 00:05:53.908 08:49:47 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.908 --rc genhtml_branch_coverage=1 00:05:53.908 --rc genhtml_function_coverage=1 00:05:53.908 --rc genhtml_legend=1 00:05:53.908 --rc geninfo_all_blocks=1 00:05:53.908 --rc geninfo_unexecuted_blocks=1 00:05:53.908 00:05:53.908 ' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.908 --rc genhtml_branch_coverage=1 00:05:53.908 --rc genhtml_function_coverage=1 00:05:53.908 --rc genhtml_legend=1 00:05:53.908 --rc geninfo_all_blocks=1 00:05:53.908 --rc geninfo_unexecuted_blocks=1 00:05:53.908 00:05:53.908 ' 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70708 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70708 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70708 ']' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.908 08:49:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.908 08:49:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:53.908 [2024-11-28 08:49:47.927033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:53.908 [2024-11-28 08:49:47.927145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:05:54.169 [2024-11-28 08:49:48.067647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.169 [2024-11-28 08:49:48.102589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.169 [2024-11-28 08:49:48.102669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.740 08:49:48 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.740 08:49:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:54.740 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:54.740 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70720 00:05:54.740 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.004 [ 00:05:55.004 "bdev_malloc_delete", 00:05:55.004 "bdev_malloc_create", 00:05:55.004 "bdev_null_resize", 00:05:55.004 "bdev_null_delete", 00:05:55.004 "bdev_null_create", 00:05:55.004 "bdev_nvme_cuse_unregister", 00:05:55.004 "bdev_nvme_cuse_register", 00:05:55.004 "bdev_opal_new_user", 00:05:55.004 "bdev_opal_set_lock_state", 00:05:55.004 "bdev_opal_delete", 00:05:55.004 "bdev_opal_get_info", 00:05:55.004 "bdev_opal_create", 00:05:55.004 "bdev_nvme_opal_revert", 00:05:55.004 "bdev_nvme_opal_init", 00:05:55.004 "bdev_nvme_send_cmd", 00:05:55.004 "bdev_nvme_set_keys", 00:05:55.004 "bdev_nvme_get_path_iostat", 00:05:55.004 "bdev_nvme_get_mdns_discovery_info", 00:05:55.004 "bdev_nvme_stop_mdns_discovery", 00:05:55.004 "bdev_nvme_start_mdns_discovery", 00:05:55.004 "bdev_nvme_set_multipath_policy", 00:05:55.004 
"bdev_nvme_set_preferred_path", 00:05:55.004 "bdev_nvme_get_io_paths", 00:05:55.004 "bdev_nvme_remove_error_injection", 00:05:55.004 "bdev_nvme_add_error_injection", 00:05:55.004 "bdev_nvme_get_discovery_info", 00:05:55.004 "bdev_nvme_stop_discovery", 00:05:55.004 "bdev_nvme_start_discovery", 00:05:55.004 "bdev_nvme_get_controller_health_info", 00:05:55.004 "bdev_nvme_disable_controller", 00:05:55.004 "bdev_nvme_enable_controller", 00:05:55.004 "bdev_nvme_reset_controller", 00:05:55.004 "bdev_nvme_get_transport_statistics", 00:05:55.004 "bdev_nvme_apply_firmware", 00:05:55.004 "bdev_nvme_detach_controller", 00:05:55.004 "bdev_nvme_get_controllers", 00:05:55.004 "bdev_nvme_attach_controller", 00:05:55.004 "bdev_nvme_set_hotplug", 00:05:55.004 "bdev_nvme_set_options", 00:05:55.004 "bdev_passthru_delete", 00:05:55.004 "bdev_passthru_create", 00:05:55.004 "bdev_lvol_set_parent_bdev", 00:05:55.004 "bdev_lvol_set_parent", 00:05:55.004 "bdev_lvol_check_shallow_copy", 00:05:55.004 "bdev_lvol_start_shallow_copy", 00:05:55.004 "bdev_lvol_grow_lvstore", 00:05:55.004 "bdev_lvol_get_lvols", 00:05:55.004 "bdev_lvol_get_lvstores", 00:05:55.004 "bdev_lvol_delete", 00:05:55.004 "bdev_lvol_set_read_only", 00:05:55.004 "bdev_lvol_resize", 00:05:55.004 "bdev_lvol_decouple_parent", 00:05:55.004 "bdev_lvol_inflate", 00:05:55.004 "bdev_lvol_rename", 00:05:55.004 "bdev_lvol_clone_bdev", 00:05:55.004 "bdev_lvol_clone", 00:05:55.004 "bdev_lvol_snapshot", 00:05:55.004 "bdev_lvol_create", 00:05:55.004 "bdev_lvol_delete_lvstore", 00:05:55.004 "bdev_lvol_rename_lvstore", 00:05:55.004 "bdev_lvol_create_lvstore", 00:05:55.004 "bdev_raid_set_options", 00:05:55.004 "bdev_raid_remove_base_bdev", 00:05:55.004 "bdev_raid_add_base_bdev", 00:05:55.004 "bdev_raid_delete", 00:05:55.004 "bdev_raid_create", 00:05:55.004 "bdev_raid_get_bdevs", 00:05:55.004 "bdev_error_inject_error", 00:05:55.004 "bdev_error_delete", 00:05:55.004 "bdev_error_create", 00:05:55.004 "bdev_split_delete", 00:05:55.004 
"bdev_split_create", 00:05:55.004 "bdev_delay_delete", 00:05:55.004 "bdev_delay_create", 00:05:55.004 "bdev_delay_update_latency", 00:05:55.004 "bdev_zone_block_delete", 00:05:55.004 "bdev_zone_block_create", 00:05:55.004 "blobfs_create", 00:05:55.004 "blobfs_detect", 00:05:55.004 "blobfs_set_cache_size", 00:05:55.004 "bdev_xnvme_delete", 00:05:55.004 "bdev_xnvme_create", 00:05:55.004 "bdev_aio_delete", 00:05:55.004 "bdev_aio_rescan", 00:05:55.004 "bdev_aio_create", 00:05:55.004 "bdev_ftl_set_property", 00:05:55.004 "bdev_ftl_get_properties", 00:05:55.004 "bdev_ftl_get_stats", 00:05:55.004 "bdev_ftl_unmap", 00:05:55.004 "bdev_ftl_unload", 00:05:55.004 "bdev_ftl_delete", 00:05:55.004 "bdev_ftl_load", 00:05:55.004 "bdev_ftl_create", 00:05:55.004 "bdev_virtio_attach_controller", 00:05:55.004 "bdev_virtio_scsi_get_devices", 00:05:55.004 "bdev_virtio_detach_controller", 00:05:55.004 "bdev_virtio_blk_set_hotplug", 00:05:55.004 "bdev_iscsi_delete", 00:05:55.004 "bdev_iscsi_create", 00:05:55.004 "bdev_iscsi_set_options", 00:05:55.004 "accel_error_inject_error", 00:05:55.004 "ioat_scan_accel_module", 00:05:55.004 "dsa_scan_accel_module", 00:05:55.004 "iaa_scan_accel_module", 00:05:55.004 "keyring_file_remove_key", 00:05:55.004 "keyring_file_add_key", 00:05:55.004 "keyring_linux_set_options", 00:05:55.004 "fsdev_aio_delete", 00:05:55.004 "fsdev_aio_create", 00:05:55.004 "iscsi_get_histogram", 00:05:55.004 "iscsi_enable_histogram", 00:05:55.004 "iscsi_set_options", 00:05:55.004 "iscsi_get_auth_groups", 00:05:55.004 "iscsi_auth_group_remove_secret", 00:05:55.004 "iscsi_auth_group_add_secret", 00:05:55.004 "iscsi_delete_auth_group", 00:05:55.004 "iscsi_create_auth_group", 00:05:55.004 "iscsi_set_discovery_auth", 00:05:55.004 "iscsi_get_options", 00:05:55.004 "iscsi_target_node_request_logout", 00:05:55.004 "iscsi_target_node_set_redirect", 00:05:55.004 "iscsi_target_node_set_auth", 00:05:55.004 "iscsi_target_node_add_lun", 00:05:55.004 "iscsi_get_stats", 00:05:55.004 
"iscsi_get_connections", 00:05:55.004 "iscsi_portal_group_set_auth", 00:05:55.004 "iscsi_start_portal_group", 00:05:55.004 "iscsi_delete_portal_group", 00:05:55.004 "iscsi_create_portal_group", 00:05:55.004 "iscsi_get_portal_groups", 00:05:55.005 "iscsi_delete_target_node", 00:05:55.005 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.005 "iscsi_target_node_add_pg_ig_maps", 00:05:55.005 "iscsi_create_target_node", 00:05:55.005 "iscsi_get_target_nodes", 00:05:55.005 "iscsi_delete_initiator_group", 00:05:55.005 "iscsi_initiator_group_remove_initiators", 00:05:55.005 "iscsi_initiator_group_add_initiators", 00:05:55.005 "iscsi_create_initiator_group", 00:05:55.005 "iscsi_get_initiator_groups", 00:05:55.005 "nvmf_set_crdt", 00:05:55.005 "nvmf_set_config", 00:05:55.005 "nvmf_set_max_subsystems", 00:05:55.005 "nvmf_stop_mdns_prr", 00:05:55.005 "nvmf_publish_mdns_prr", 00:05:55.005 "nvmf_subsystem_get_listeners", 00:05:55.005 "nvmf_subsystem_get_qpairs", 00:05:55.005 "nvmf_subsystem_get_controllers", 00:05:55.005 "nvmf_get_stats", 00:05:55.005 "nvmf_get_transports", 00:05:55.005 "nvmf_create_transport", 00:05:55.005 "nvmf_get_targets", 00:05:55.005 "nvmf_delete_target", 00:05:55.005 "nvmf_create_target", 00:05:55.005 "nvmf_subsystem_allow_any_host", 00:05:55.005 "nvmf_subsystem_set_keys", 00:05:55.005 "nvmf_subsystem_remove_host", 00:05:55.005 "nvmf_subsystem_add_host", 00:05:55.005 "nvmf_ns_remove_host", 00:05:55.005 "nvmf_ns_add_host", 00:05:55.005 "nvmf_subsystem_remove_ns", 00:05:55.005 "nvmf_subsystem_set_ns_ana_group", 00:05:55.005 "nvmf_subsystem_add_ns", 00:05:55.005 "nvmf_subsystem_listener_set_ana_state", 00:05:55.005 "nvmf_discovery_get_referrals", 00:05:55.005 "nvmf_discovery_remove_referral", 00:05:55.005 "nvmf_discovery_add_referral", 00:05:55.005 "nvmf_subsystem_remove_listener", 00:05:55.005 "nvmf_subsystem_add_listener", 00:05:55.005 "nvmf_delete_subsystem", 00:05:55.005 "nvmf_create_subsystem", 00:05:55.005 "nvmf_get_subsystems", 00:05:55.005 
"env_dpdk_get_mem_stats", 00:05:55.005 "nbd_get_disks", 00:05:55.005 "nbd_stop_disk", 00:05:55.005 "nbd_start_disk", 00:05:55.005 "ublk_recover_disk", 00:05:55.005 "ublk_get_disks", 00:05:55.005 "ublk_stop_disk", 00:05:55.005 "ublk_start_disk", 00:05:55.005 "ublk_destroy_target", 00:05:55.005 "ublk_create_target", 00:05:55.005 "virtio_blk_create_transport", 00:05:55.005 "virtio_blk_get_transports", 00:05:55.005 "vhost_controller_set_coalescing", 00:05:55.005 "vhost_get_controllers", 00:05:55.005 "vhost_delete_controller", 00:05:55.005 "vhost_create_blk_controller", 00:05:55.005 "vhost_scsi_controller_remove_target", 00:05:55.005 "vhost_scsi_controller_add_target", 00:05:55.005 "vhost_start_scsi_controller", 00:05:55.005 "vhost_create_scsi_controller", 00:05:55.005 "thread_set_cpumask", 00:05:55.005 "scheduler_set_options", 00:05:55.005 "framework_get_governor", 00:05:55.005 "framework_get_scheduler", 00:05:55.005 "framework_set_scheduler", 00:05:55.005 "framework_get_reactors", 00:05:55.005 "thread_get_io_channels", 00:05:55.005 "thread_get_pollers", 00:05:55.005 "thread_get_stats", 00:05:55.005 "framework_monitor_context_switch", 00:05:55.005 "spdk_kill_instance", 00:05:55.005 "log_enable_timestamps", 00:05:55.005 "log_get_flags", 00:05:55.005 "log_clear_flag", 00:05:55.005 "log_set_flag", 00:05:55.005 "log_get_level", 00:05:55.005 "log_set_level", 00:05:55.005 "log_get_print_level", 00:05:55.005 "log_set_print_level", 00:05:55.005 "framework_enable_cpumask_locks", 00:05:55.005 "framework_disable_cpumask_locks", 00:05:55.005 "framework_wait_init", 00:05:55.005 "framework_start_init", 00:05:55.005 "scsi_get_devices", 00:05:55.005 "bdev_get_histogram", 00:05:55.005 "bdev_enable_histogram", 00:05:55.005 "bdev_set_qos_limit", 00:05:55.005 "bdev_set_qd_sampling_period", 00:05:55.005 "bdev_get_bdevs", 00:05:55.005 "bdev_reset_iostat", 00:05:55.005 "bdev_get_iostat", 00:05:55.005 "bdev_examine", 00:05:55.005 "bdev_wait_for_examine", 00:05:55.005 "bdev_set_options", 
00:05:55.005 "accel_get_stats", 00:05:55.005 "accel_set_options", 00:05:55.005 "accel_set_driver", 00:05:55.005 "accel_crypto_key_destroy", 00:05:55.005 "accel_crypto_keys_get", 00:05:55.005 "accel_crypto_key_create", 00:05:55.005 "accel_assign_opc", 00:05:55.005 "accel_get_module_info", 00:05:55.005 "accel_get_opc_assignments", 00:05:55.005 "vmd_rescan", 00:05:55.005 "vmd_remove_device", 00:05:55.005 "vmd_enable", 00:05:55.005 "sock_get_default_impl", 00:05:55.005 "sock_set_default_impl", 00:05:55.005 "sock_impl_set_options", 00:05:55.005 "sock_impl_get_options", 00:05:55.005 "iobuf_get_stats", 00:05:55.005 "iobuf_set_options", 00:05:55.005 "keyring_get_keys", 00:05:55.005 "framework_get_pci_devices", 00:05:55.005 "framework_get_config", 00:05:55.005 "framework_get_subsystems", 00:05:55.005 "fsdev_set_opts", 00:05:55.005 "fsdev_get_opts", 00:05:55.005 "trace_get_info", 00:05:55.005 "trace_get_tpoint_group_mask", 00:05:55.005 "trace_disable_tpoint_group", 00:05:55.005 "trace_enable_tpoint_group", 00:05:55.005 "trace_clear_tpoint_mask", 00:05:55.005 "trace_set_tpoint_mask", 00:05:55.005 "notify_get_notifications", 00:05:55.005 "notify_get_types", 00:05:55.005 "spdk_get_version", 00:05:55.005 "rpc_get_methods" 00:05:55.005 ] 00:05:55.005 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.005 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.005 08:49:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70708 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70708 ']' 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70708 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70708 00:05:55.005 killing process with pid 70708 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70708' 00:05:55.005 08:49:48 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70708 00:05:55.005 08:49:49 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70708 00:05:55.293 00:05:55.293 real 0m1.542s 00:05:55.293 user 0m2.743s 00:05:55.293 sys 0m0.385s 00:05:55.293 08:49:49 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.293 08:49:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.293 ************************************ 00:05:55.293 END TEST spdkcli_tcp 00:05:55.293 ************************************ 00:05:55.293 08:49:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.293 08:49:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.293 08:49:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.293 08:49:49 -- common/autotest_common.sh@10 -- # set +x 00:05:55.293 ************************************ 00:05:55.293 START TEST dpdk_mem_utility 00:05:55.293 ************************************ 00:05:55.293 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.293 * Looking for test storage... 
00:05:55.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:55.293 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.293 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.293 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:55.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.560 08:49:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.560 --rc genhtml_branch_coverage=1 00:05:55.560 --rc genhtml_function_coverage=1 00:05:55.560 --rc genhtml_legend=1 00:05:55.560 --rc geninfo_all_blocks=1 00:05:55.560 --rc geninfo_unexecuted_blocks=1 00:05:55.560 00:05:55.560 ' 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.560 --rc genhtml_branch_coverage=1 00:05:55.560 --rc genhtml_function_coverage=1 
00:05:55.560 --rc genhtml_legend=1 00:05:55.560 --rc geninfo_all_blocks=1 00:05:55.560 --rc geninfo_unexecuted_blocks=1 00:05:55.560 00:05:55.560 ' 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.560 --rc genhtml_branch_coverage=1 00:05:55.560 --rc genhtml_function_coverage=1 00:05:55.560 --rc genhtml_legend=1 00:05:55.560 --rc geninfo_all_blocks=1 00:05:55.560 --rc geninfo_unexecuted_blocks=1 00:05:55.560 00:05:55.560 ' 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.560 --rc genhtml_branch_coverage=1 00:05:55.560 --rc genhtml_function_coverage=1 00:05:55.560 --rc genhtml_legend=1 00:05:55.560 --rc geninfo_all_blocks=1 00:05:55.560 --rc geninfo_unexecuted_blocks=1 00:05:55.560 00:05:55.560 ' 00:05:55.560 08:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.560 08:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70803 00:05:55.560 08:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70803 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70803 ']' 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.560 08:49:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.560 08:49:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.560 [2024-11-28 08:49:49.509216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.560 [2024-11-28 08:49:49.509343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70803 ] 00:05:55.560 [2024-11-28 08:49:49.662365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.821 [2024-11-28 08:49:49.704276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.395 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.395 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:56.395 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.395 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.395 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.395 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.395 { 00:05:56.395 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.395 } 00:05:56.395 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.395 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.395 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:56.395 1 heaps totaling size 860.000000 MiB 
00:05:56.395 size: 860.000000 MiB heap id: 0 00:05:56.395 end heaps---------- 00:05:56.395 9 mempools totaling size 642.649841 MiB 00:05:56.395 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.395 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.395 size: 92.545471 MiB name: bdev_io_70803 00:05:56.395 size: 51.011292 MiB name: evtpool_70803 00:05:56.395 size: 50.003479 MiB name: msgpool_70803 00:05:56.395 size: 36.509338 MiB name: fsdev_io_70803 00:05:56.395 size: 21.763794 MiB name: PDU_Pool 00:05:56.395 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:56.395 size: 0.026123 MiB name: Session_Pool 00:05:56.395 end mempools------- 00:05:56.395 6 memzones totaling size 4.142822 MiB 00:05:56.395 size: 1.000366 MiB name: RG_ring_0_70803 00:05:56.395 size: 1.000366 MiB name: RG_ring_1_70803 00:05:56.395 size: 1.000366 MiB name: RG_ring_4_70803 00:05:56.395 size: 1.000366 MiB name: RG_ring_5_70803 00:05:56.395 size: 0.125366 MiB name: RG_ring_2_70803 00:05:56.395 size: 0.015991 MiB name: RG_ring_3_70803 00:05:56.395 end memzones------- 00:05:56.395 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:56.395 heap id: 0 total size: 860.000000 MiB number of busy elements: 302 number of free elements: 16 00:05:56.395 list of free elements. 
size: 13.937439 MiB 00:05:56.395 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:56.395 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:56.395 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:56.395 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:56.395 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:56.395 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:56.395 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:56.395 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:56.395 element at address: 0x200000200000 with size: 0.834839 MiB 00:05:56.395 element at address: 0x20001d800000 with size: 0.568237 MiB 00:05:56.395 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:56.395 element at address: 0x200003e00000 with size: 0.488831 MiB 00:05:56.395 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:56.395 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:56.395 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:56.395 element at address: 0x200003a00000 with size: 0.353027 MiB 00:05:56.395 list of standard malloc elements. 
size: 199.265869 MiB 00:05:56.395 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:56.395 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:56.395 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:56.395 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:56.395 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:56.395 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:56.395 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:56.395 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:56.395 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:56.395 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:56.395 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:56.395 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:05:56.396 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7da80 with 
size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:56.396 element at address: 
0x20000707b000 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:56.396 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:56.396 
element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892c80 with size: 0.000183 
MiB 00:05:56.396 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:56.396 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894180 
with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:56.397 element at 
address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d740 with size: 0.000183 MiB 
00:05:56.397 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ec40 with 
size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:56.397 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:56.397 list of memzone associated elements. 
size: 646.796692 MiB 00:05:56.397 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:56.397 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:56.397 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:56.397 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:56.397 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:56.397 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70803_0 00:05:56.397 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:56.397 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70803_0 00:05:56.397 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:56.397 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70803_0 00:05:56.397 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:56.397 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70803_0 00:05:56.397 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:56.397 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:56.397 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:56.397 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:56.397 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:56.397 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70803 00:05:56.397 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:56.397 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70803 00:05:56.397 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:56.397 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70803 00:05:56.397 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:56.397 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:56.397 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:56.397 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:56.398 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:56.398 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:56.398 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:56.398 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:56.398 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:56.398 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70803 00:05:56.398 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:56.398 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70803 00:05:56.398 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:56.398 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70803 00:05:56.398 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:56.398 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70803 00:05:56.398 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:56.398 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70803 00:05:56.398 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:56.398 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70803 00:05:56.398 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:56.398 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:56.398 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:56.398 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:56.398 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:56.398 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:56.398 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:05:56.398 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70803 00:05:56.398 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:56.398 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:56.398 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:56.398 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:56.398 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:05:56.398 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70803 00:05:56.398 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:56.398 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:56.398 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:56.398 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70803 00:05:56.398 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:56.398 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70803 00:05:56.398 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:05:56.398 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70803 00:05:56.398 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:56.398 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:56.398 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:56.398 08:49:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70803 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70803 ']' 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70803 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70803 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.398 08:49:50 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70803' 00:05:56.398 killing process with pid 70803 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70803 00:05:56.398 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70803 00:05:56.659 00:05:56.659 real 0m1.439s 00:05:56.659 user 0m1.472s 00:05:56.659 sys 0m0.381s 00:05:56.659 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.659 08:49:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.659 ************************************ 00:05:56.659 END TEST dpdk_mem_utility 00:05:56.659 ************************************ 00:05:56.659 08:49:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.659 08:49:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.659 08:49:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.659 08:49:50 -- common/autotest_common.sh@10 -- # set +x 00:05:56.919 ************************************ 00:05:56.919 START TEST event 00:05:56.919 ************************************ 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.919 * Looking for test storage... 
00:05:56.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.919 08:49:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.919 08:49:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.919 08:49:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.919 08:49:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.919 08:49:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.919 08:49:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.919 08:49:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.919 08:49:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.919 08:49:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.919 08:49:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.919 08:49:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.919 08:49:50 event -- scripts/common.sh@344 -- # case "$op" in 00:05:56.919 08:49:50 event -- scripts/common.sh@345 -- # : 1 00:05:56.919 08:49:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.919 08:49:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.919 08:49:50 event -- scripts/common.sh@365 -- # decimal 1 00:05:56.919 08:49:50 event -- scripts/common.sh@353 -- # local d=1 00:05:56.919 08:49:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.919 08:49:50 event -- scripts/common.sh@355 -- # echo 1 00:05:56.919 08:49:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.919 08:49:50 event -- scripts/common.sh@366 -- # decimal 2 00:05:56.919 08:49:50 event -- scripts/common.sh@353 -- # local d=2 00:05:56.919 08:49:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.919 08:49:50 event -- scripts/common.sh@355 -- # echo 2 00:05:56.919 08:49:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.919 08:49:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.919 08:49:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.919 08:49:50 event -- scripts/common.sh@368 -- # return 0 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.919 --rc genhtml_branch_coverage=1 00:05:56.919 --rc genhtml_function_coverage=1 00:05:56.919 --rc genhtml_legend=1 00:05:56.919 --rc geninfo_all_blocks=1 00:05:56.919 --rc geninfo_unexecuted_blocks=1 00:05:56.919 00:05:56.919 ' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.919 --rc genhtml_branch_coverage=1 00:05:56.919 --rc genhtml_function_coverage=1 00:05:56.919 --rc genhtml_legend=1 00:05:56.919 --rc geninfo_all_blocks=1 00:05:56.919 --rc geninfo_unexecuted_blocks=1 00:05:56.919 00:05:56.919 ' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.919 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:56.919 --rc genhtml_branch_coverage=1 00:05:56.919 --rc genhtml_function_coverage=1 00:05:56.919 --rc genhtml_legend=1 00:05:56.919 --rc geninfo_all_blocks=1 00:05:56.919 --rc geninfo_unexecuted_blocks=1 00:05:56.919 00:05:56.919 ' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.919 --rc genhtml_branch_coverage=1 00:05:56.919 --rc genhtml_function_coverage=1 00:05:56.919 --rc genhtml_legend=1 00:05:56.919 --rc geninfo_all_blocks=1 00:05:56.919 --rc geninfo_unexecuted_blocks=1 00:05:56.919 00:05:56.919 ' 00:05:56.919 08:49:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:56.919 08:49:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.919 08:49:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:56.919 08:49:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.919 08:49:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.919 ************************************ 00:05:56.919 START TEST event_perf 00:05:56.919 ************************************ 00:05:56.919 08:49:50 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.919 Running I/O for 1 seconds...[2024-11-28 08:49:50.960004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:56.919 [2024-11-28 08:49:50.960186] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70882 ] 00:05:57.178 [2024-11-28 08:49:51.105971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.178 [2024-11-28 08:49:51.138870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.178 [2024-11-28 08:49:51.139085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.178 Running I/O for 1 seconds...[2024-11-28 08:49:51.139263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.178 [2024-11-28 08:49:51.139362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.112 00:05:58.112 lcore 0: 156467 00:05:58.112 lcore 1: 156469 00:05:58.112 lcore 2: 156467 00:05:58.112 lcore 3: 156469 00:05:58.112 done. 
00:05:58.112 00:05:58.112 real 0m1.267s 00:05:58.112 user 0m4.055s 00:05:58.112 sys 0m0.082s 00:05:58.112 08:49:52 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.112 ************************************ 00:05:58.112 08:49:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.112 END TEST event_perf 00:05:58.112 ************************************ 00:05:58.112 08:49:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.112 08:49:52 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:58.112 08:49:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.112 08:49:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.371 ************************************ 00:05:58.371 START TEST event_reactor 00:05:58.371 ************************************ 00:05:58.371 08:49:52 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.371 [2024-11-28 08:49:52.262351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:58.371 [2024-11-28 08:49:52.262470] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:05:58.371 [2024-11-28 08:49:52.409301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.371 [2024-11-28 08:49:52.440340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.749 test_start 00:05:59.749 oneshot 00:05:59.749 tick 100 00:05:59.749 tick 100 00:05:59.749 tick 250 00:05:59.749 tick 100 00:05:59.749 tick 100 00:05:59.749 tick 250 00:05:59.749 tick 100 00:05:59.749 tick 500 00:05:59.749 tick 100 00:05:59.749 tick 100 00:05:59.749 tick 250 00:05:59.749 tick 100 00:05:59.749 tick 100 00:05:59.749 test_end 00:05:59.749 00:05:59.749 real 0m1.254s 00:05:59.749 user 0m1.076s 00:05:59.749 sys 0m0.072s 00:05:59.749 08:49:53 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.749 08:49:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:59.749 ************************************ 00:05:59.749 END TEST event_reactor 00:05:59.749 ************************************ 00:05:59.749 08:49:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.749 08:49:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:59.749 08:49:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.749 08:49:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.749 ************************************ 00:05:59.749 START TEST event_reactor_perf 00:05:59.749 ************************************ 00:05:59.749 08:49:53 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.749 [2024-11-28 
08:49:53.558722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:59.749 [2024-11-28 08:49:53.558921] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:05:59.749 [2024-11-28 08:49:53.705828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.749 [2024-11-28 08:49:53.733879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.688 test_start 00:06:00.688 test_end 00:06:00.688 Performance: 422085 events per second 00:06:00.688 ************************************ 00:06:00.688 END TEST event_reactor_perf 00:06:00.688 ************************************ 00:06:00.688 00:06:00.688 real 0m1.252s 00:06:00.688 user 0m1.078s 00:06:00.688 sys 0m0.068s 00:06:00.688 08:49:54 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.688 08:49:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.950 08:49:54 event -- event/event.sh@49 -- # uname -s 00:06:00.950 08:49:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.950 08:49:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.950 08:49:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.950 08:49:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.950 08:49:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.950 ************************************ 00:06:00.950 START TEST event_scheduler 00:06:00.950 ************************************ 00:06:00.950 08:49:54 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.950 * Looking for test storage... 
00:06:00.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:00.950 08:49:54 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.950 08:49:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.950 08:49:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.950 08:49:54 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:00.950 08:49:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.950 08:49:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:00.950 08:49:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.950 08:49:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.950 08:49:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.950 08:49:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.950 --rc genhtml_branch_coverage=1 00:06:00.950 --rc genhtml_function_coverage=1 00:06:00.950 --rc genhtml_legend=1 00:06:00.950 --rc geninfo_all_blocks=1 00:06:00.950 --rc geninfo_unexecuted_blocks=1 00:06:00.950 00:06:00.950 ' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.950 --rc genhtml_branch_coverage=1 00:06:00.950 --rc genhtml_function_coverage=1 00:06:00.950 --rc 
genhtml_legend=1 00:06:00.950 --rc geninfo_all_blocks=1 00:06:00.950 --rc geninfo_unexecuted_blocks=1 00:06:00.950 00:06:00.950 ' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.950 --rc genhtml_branch_coverage=1 00:06:00.950 --rc genhtml_function_coverage=1 00:06:00.950 --rc genhtml_legend=1 00:06:00.950 --rc geninfo_all_blocks=1 00:06:00.950 --rc geninfo_unexecuted_blocks=1 00:06:00.950 00:06:00.950 ' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.950 --rc genhtml_branch_coverage=1 00:06:00.950 --rc genhtml_function_coverage=1 00:06:00.950 --rc genhtml_legend=1 00:06:00.950 --rc geninfo_all_blocks=1 00:06:00.950 --rc geninfo_unexecuted_blocks=1 00:06:00.950 00:06:00.950 ' 00:06:00.950 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.950 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71019 00:06:00.950 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.950 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71019 00:06:00.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
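The `lt 1.15 2` trace above walks `cmp_versions` in scripts/common.sh: both version strings are split on `.`, `-` and `:` (via `IFS=.-:`) and compared field by field. A condensed sketch of that logic, simplified to numeric fields only and omitting the traced `decimal` validation step:

```shell
# Condensed sketch of the cmp_versions flow traced above
# (scripts/common.sh); assumes purely numeric version fields.
cmp_versions() {
    local IFS=.-: op=$2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [ "$op" = ">" ]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [ "$op" = "<" ]; return; }
    done
    [ "$op" = "=" ]
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

This is why the trace sets `ver1_l=2` and `ver2_l=1` for `1.15` vs `2`, and why the first differing field (`1 < 2`) decides the comparison immediately.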
00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71019 ']' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.950 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.950 08:49:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.210 [2024-11-28 08:49:55.085011] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:01.210 [2024-11-28 08:49:55.085935] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71019 ] 00:06:01.210 [2024-11-28 08:49:55.239257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.210 [2024-11-28 08:49:55.284128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.210 [2024-11-28 08:49:55.284456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.210 [2024-11-28 08:49:55.284752] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.210 [2024-11-28 08:49:55.284838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@864 
-- # return 0 00:06:02.145 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.145 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.145 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.145 POWER: Cannot set governor of lcore 0 to performance 00:06:02.145 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.145 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.145 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.145 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.145 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:02.145 POWER: Unable to set Power Management Environment for lcore 0 00:06:02.145 [2024-11-28 08:49:55.918213] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:02.145 [2024-11-28 08:49:55.918238] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:02.145 [2024-11-28 08:49:55.918247] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:02.145 [2024-11-28 08:49:55.918263] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.145 [2024-11-28 08:49:55.918285] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.145 [2024-11-28 08:49:55.918306] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:55 event.event_scheduler -- 
scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 [2024-11-28 08:49:55.987830] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.145 08:49:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 ************************************ 00:06:02.145 START TEST scheduler_create_thread 00:06:02.145 ************************************ 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 2 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 3 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 4 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 5 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 6 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 7 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 8 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 9 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 10 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.145 08:49:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.519 08:49:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.519 08:49:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.519 08:49:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.519 08:49:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.519 08:49:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 ************************************ 00:06:04.894 END TEST scheduler_create_thread 00:06:04.894 ************************************ 00:06:04.894 08:49:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.894 00:06:04.894 real 0m2.610s 00:06:04.894 user 0m0.014s 00:06:04.894 sys 0m0.004s 00:06:04.894 08:49:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.894 08:49:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 08:49:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.895 08:49:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71019 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71019 ']' 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71019 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71019 00:06:04.895 killing process with pid 71019 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 
00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71019' 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71019 00:06:04.895 08:49:58 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71019 00:06:05.152 [2024-11-28 08:49:59.091114] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:05.411 00:06:05.411 real 0m4.430s 00:06:05.411 user 0m7.973s 00:06:05.411 sys 0m0.375s 00:06:05.411 08:49:59 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.411 ************************************ 00:06:05.411 END TEST event_scheduler 00:06:05.411 ************************************ 00:06:05.411 08:49:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.411 08:49:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:05.411 08:49:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:05.411 08:49:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.411 08:49:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.411 08:49:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.411 ************************************ 00:06:05.411 START TEST app_repeat 00:06:05.411 ************************************ 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@15 -- # 
local repeat_times=4 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:05.411 Process app_repeat pid: 71119 00:06:05.411 spdk_app_start Round 0 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71119 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71119' 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71119 /var/tmp/spdk-nbd.sock 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71119 ']' 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.411 08:49:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.411 08:49:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.411 [2024-11-28 08:49:59.368651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:05.411 [2024-11-28 08:49:59.368762] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71119 ] 00:06:05.411 [2024-11-28 08:49:59.513857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.715 [2024-11-28 08:49:59.551104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.715 [2024-11-28 08:49:59.551272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.296 08:50:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.296 08:50:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:06.296 08:50:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.296 Malloc0 00:06:06.296 08:50:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.555 Malloc1 00:06:06.555 08:50:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.555 08:50:00 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.555 08:50:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.814 /dev/nbd0 00:06:06.814 08:50:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.814 08:50:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.814 1+0 records in 00:06:06.814 1+0 
records out 00:06:06.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299386 s, 13.7 MB/s 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.814 08:50:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.814 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.814 08:50:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.814 08:50:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.072 /dev/nbd1 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.072 1+0 records in 00:06:07.072 1+0 records out 00:06:07.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226294 s, 18.1 MB/s 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.072 08:50:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.072 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.330 { 00:06:07.330 "nbd_device": "/dev/nbd0", 00:06:07.330 "bdev_name": "Malloc0" 00:06:07.330 }, 00:06:07.330 { 00:06:07.330 "nbd_device": "/dev/nbd1", 00:06:07.330 "bdev_name": "Malloc1" 00:06:07.330 } 00:06:07.330 ]' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.330 { 00:06:07.330 "nbd_device": "/dev/nbd0", 00:06:07.330 "bdev_name": "Malloc0" 00:06:07.330 }, 00:06:07.330 { 00:06:07.330 "nbd_device": "/dev/nbd1", 00:06:07.330 "bdev_name": "Malloc1" 00:06:07.330 } 00:06:07.330 ]' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.330 /dev/nbd1' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.330 /dev/nbd1' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.330 256+0 records in 00:06:07.330 256+0 records out 00:06:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414347 s, 253 MB/s 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.330 256+0 records in 00:06:07.330 256+0 records out 00:06:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131004 s, 80.0 MB/s 00:06:07.330 08:50:01 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.330 256+0 records in 00:06:07.330 256+0 records out 00:06:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156798 s, 66.9 MB/s 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.330 08:50:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.331 08:50:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.331 08:50:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.331 08:50:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.589 08:50:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.846 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.104 08:50:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.104 08:50:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.104 08:50:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.362 08:50:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.362 [2024-11-28 08:50:02.350374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.362 [2024-11-28 08:50:02.386237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.362 [2024-11-28 08:50:02.386350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.362 
[2024-11-28 08:50:02.415363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.362 [2024-11-28 08:50:02.415410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.643 spdk_app_start Round 1 00:06:11.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.643 08:50:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.643 08:50:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.643 08:50:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71119 /var/tmp/spdk-nbd.sock 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71119 ']' 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.643 08:50:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:11.643 08:50:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.643 Malloc0 00:06:11.643 08:50:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.901 Malloc1 00:06:11.901 08:50:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.901 08:50:05 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.901 08:50:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.161 /dev/nbd0 00:06:12.161 08:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.161 08:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.161 1+0 records in 00:06:12.161 1+0 records out 00:06:12.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157225 s, 26.1 MB/s 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.161 
08:50:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.161 08:50:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.161 08:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.161 08:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.161 08:50:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.419 /dev/nbd1 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.419 1+0 records in 00:06:12.419 1+0 records out 00:06:12.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279603 s, 14.6 MB/s 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.419 08:50:06 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.419 08:50:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.419 08:50:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.419 { 00:06:12.419 "nbd_device": "/dev/nbd0", 00:06:12.419 "bdev_name": "Malloc0" 00:06:12.419 }, 00:06:12.419 { 00:06:12.419 "nbd_device": "/dev/nbd1", 00:06:12.419 "bdev_name": "Malloc1" 00:06:12.419 } 00:06:12.419 ]' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.678 { 00:06:12.678 "nbd_device": "/dev/nbd0", 00:06:12.678 "bdev_name": "Malloc0" 00:06:12.678 }, 00:06:12.678 { 00:06:12.678 "nbd_device": "/dev/nbd1", 00:06:12.678 "bdev_name": "Malloc1" 00:06:12.678 } 00:06:12.678 ]' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.678 /dev/nbd1' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.678 /dev/nbd1' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.678 
08:50:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.678 256+0 records in 00:06:12.678 256+0 records out 00:06:12.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695316 s, 151 MB/s 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.678 256+0 records in 00:06:12.678 256+0 records out 00:06:12.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135377 s, 77.5 MB/s 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.678 256+0 records in 00:06:12.678 256+0 records out 00:06:12.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142764 s, 73.4 MB/s 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.678 08:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.937 08:50:06 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.937 08:50:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.196 08:50:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.196 08:50:07 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.454 08:50:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.454 08:50:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.454 08:50:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.713 [2024-11-28 08:50:07.636472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.713 [2024-11-28 08:50:07.671398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.713 [2024-11-28 08:50:07.671401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.713 [2024-11-28 08:50:07.700569] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.713 [2024-11-28 08:50:07.700604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:17.006 spdk_app_start Round 2 00:06:17.006 08:50:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.006 08:50:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:17.006 08:50:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71119 /var/tmp/spdk-nbd.sock 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71119 ']' 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.006 08:50:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:17.006 08:50:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.006 Malloc0 00:06:17.006 08:50:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.267 Malloc1 00:06:17.267 08:50:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.267 
08:50:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.267 08:50:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.528 /dev/nbd0 00:06:17.528 08:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:17.529 08:50:11 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.529 1+0 records in 00:06:17.529 1+0 records out 00:06:17.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395641 s, 10.4 MB/s 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.529 /dev/nbd1 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.529 08:50:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.529 08:50:11 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.529 1+0 records in 00:06:17.529 1+0 records out 00:06:17.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251455 s, 16.3 MB/s 00:06:17.529 08:50:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.790 08:50:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.790 08:50:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.790 08:50:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.790 08:50:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.790 08:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.790 { 00:06:17.790 "nbd_device": "/dev/nbd0", 00:06:17.790 "bdev_name": "Malloc0" 00:06:17.790 }, 00:06:17.790 { 00:06:17.790 "nbd_device": "/dev/nbd1", 00:06:17.790 "bdev_name": 
"Malloc1" 00:06:17.790 } 00:06:17.790 ]' 00:06:18.050 08:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.050 { 00:06:18.050 "nbd_device": "/dev/nbd0", 00:06:18.050 "bdev_name": "Malloc0" 00:06:18.050 }, 00:06:18.050 { 00:06:18.050 "nbd_device": "/dev/nbd1", 00:06:18.050 "bdev_name": "Malloc1" 00:06:18.050 } 00:06:18.050 ]' 00:06:18.050 08:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.050 08:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.050 /dev/nbd1' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.051 /dev/nbd1' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.051 256+0 records in 00:06:18.051 256+0 records out 00:06:18.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720795 s, 145 MB/s 
00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.051 256+0 records in 00:06:18.051 256+0 records out 00:06:18.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138592 s, 75.7 MB/s 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.051 256+0 records in 00:06:18.051 256+0 records out 00:06:18.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155744 s, 67.3 MB/s 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.051 08:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.051 08:50:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.312 08:50:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.573 08:50:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.573 08:50:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.834 08:50:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.834 [2024-11-28 08:50:12.951146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.095 [2024-11-28 08:50:12.987903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.095 [2024-11-28 08:50:12.987904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.095 [2024-11-28 08:50:13.017146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.095 [2024-11-28 08:50:13.017187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.393 08:50:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71119 /var/tmp/spdk-nbd.sock 00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71119 ']' 00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.393 08:50:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:22.393 08:50:16 event.app_repeat -- event/event.sh@39 -- # killprocess 71119 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71119 ']' 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71119 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71119 00:06:22.393 killing process with pid 71119 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71119' 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71119 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71119 00:06:22.393 spdk_app_start is called in Round 0. 00:06:22.393 Shutdown signal received, stop current app iteration 00:06:22.393 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:22.393 spdk_app_start is called in Round 1. 00:06:22.393 Shutdown signal received, stop current app iteration 00:06:22.393 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:22.393 spdk_app_start is called in Round 2. 
00:06:22.393 Shutdown signal received, stop current app iteration 00:06:22.393 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:22.393 spdk_app_start is called in Round 3. 00:06:22.393 Shutdown signal received, stop current app iteration 00:06:22.393 08:50:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:22.393 08:50:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:22.393 00:06:22.393 real 0m16.939s 00:06:22.393 user 0m37.752s 00:06:22.393 sys 0m2.073s 00:06:22.393 ************************************ 00:06:22.393 END TEST app_repeat 00:06:22.393 ************************************ 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.393 08:50:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.393 08:50:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.393 08:50:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.393 08:50:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.393 08:50:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.393 08:50:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.393 ************************************ 00:06:22.393 START TEST cpu_locks 00:06:22.393 ************************************ 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.393 * Looking for test storage... 
00:06:22.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.393 08:50:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.393 --rc genhtml_branch_coverage=1 00:06:22.393 --rc genhtml_function_coverage=1 00:06:22.393 --rc genhtml_legend=1 00:06:22.393 --rc geninfo_all_blocks=1 00:06:22.393 --rc geninfo_unexecuted_blocks=1 00:06:22.393 00:06:22.393 ' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.393 --rc genhtml_branch_coverage=1 00:06:22.393 --rc genhtml_function_coverage=1 00:06:22.393 --rc genhtml_legend=1 00:06:22.393 --rc geninfo_all_blocks=1 00:06:22.393 --rc geninfo_unexecuted_blocks=1 
00:06:22.393 00:06:22.393 ' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.393 --rc genhtml_branch_coverage=1 00:06:22.393 --rc genhtml_function_coverage=1 00:06:22.393 --rc genhtml_legend=1 00:06:22.393 --rc geninfo_all_blocks=1 00:06:22.393 --rc geninfo_unexecuted_blocks=1 00:06:22.393 00:06:22.393 ' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.393 --rc genhtml_branch_coverage=1 00:06:22.393 --rc genhtml_function_coverage=1 00:06:22.393 --rc genhtml_legend=1 00:06:22.393 --rc geninfo_all_blocks=1 00:06:22.393 --rc geninfo_unexecuted_blocks=1 00:06:22.393 00:06:22.393 ' 00:06:22.393 08:50:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.393 08:50:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.393 08:50:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.393 08:50:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.393 08:50:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.393 ************************************ 00:06:22.393 START TEST default_locks 00:06:22.393 ************************************ 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:22.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:22.393 08:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71539 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71539 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71539 ']' 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.393 08:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.652 [2024-11-28 08:50:16.533523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:22.652 [2024-11-28 08:50:16.533733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71539 ] 00:06:22.652 [2024-11-28 08:50:16.674983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.652 [2024-11-28 08:50:16.720423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.218 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.219 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:23.219 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71539 00:06:23.219 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71539 00:06:23.219 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.477 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71539 00:06:23.477 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71539 ']' 00:06:23.477 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71539 00:06:23.477 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:23.477 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71539 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.478 killing process with pid 71539 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71539' 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71539 00:06:23.478 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71539 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71539 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71539 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.736 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:23.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.993 ERROR: process (pid: 71539) is no longer running 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71539 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71539 ']' 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
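The `default_locks` test above verifies the lock with `lslocks -p <pid> | grep -q spdk_cpu_lock`, i.e. it checks that the target process holds an advisory file lock. The mechanism underneath is `flock(2)`; the following is a minimal, self-contained Python sketch of that behavior (the file name is illustrative, not SPDK's real lock path):

```python
import fcntl, os, tempfile

# Illustrative lock file, standing in for SPDK's per-core spdk_cpu_lock files.
path = os.path.join(tempfile.gettempdir(), "spdk_cpu_lock_demo")

fd_holder = os.open(path, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd_holder, fcntl.LOCK_EX)           # first opener takes the lock

# A second open file description on the same file conflicts with the holder,
# which is what makes the lock visible to tools like lslocks.
fd_probe = os.open(path, os.O_RDWR)
try:
    fcntl.flock(fd_probe, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = False                              # lock was free
except BlockingIOError:
    locked = True                               # lock is held by the first fd

fcntl.flock(fd_holder, fcntl.LOCK_UN)           # release: a retry now succeeds
fcntl.flock(fd_probe, fcntl.LOCK_EX | fcntl.LOCK_NB)
os.close(fd_probe)
os.close(fd_holder)
os.unlink(path)
print(locked)
```

Because `flock` is advisory and tied to the open file description, the lock disappears automatically when the holding process exits, which is why the test only needs `lslocks` rather than any cleanup protocol.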
00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71539) - No such process 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.993 00:06:23.993 real 0m1.383s 00:06:23.993 user 0m1.334s 00:06:23.993 sys 0m0.460s 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.993 08:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.993 ************************************ 00:06:23.993 END TEST default_locks 00:06:23.993 ************************************ 00:06:23.993 08:50:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.993 08:50:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.993 08:50:17 event.cpu_locks -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.993 08:50:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.993 ************************************ 00:06:23.993 START TEST default_locks_via_rpc 00:06:23.993 ************************************ 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71587 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71587 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71587 ']' 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.993 08:50:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.993 [2024-11-28 08:50:17.977350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:23.993 [2024-11-28 08:50:17.977576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71587 ] 00:06:24.250 [2024-11-28 08:50:18.128092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.250 [2024-11-28 08:50:18.161486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.817 08:50:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71587 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71587 00:06:24.817 08:50:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71587 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71587 ']' 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71587 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71587 00:06:25.078 killing process with pid 71587 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71587' 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71587 00:06:25.078 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71587 00:06:25.338 ************************************ 00:06:25.338 END TEST default_locks_via_rpc 00:06:25.338 ************************************ 00:06:25.338 00:06:25.338 real 0m1.406s 00:06:25.338 user 0m1.403s 00:06:25.338 sys 0m0.429s 00:06:25.338 
08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.338 08:50:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.338 08:50:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:25.338 08:50:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.338 08:50:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.338 08:50:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.338 ************************************ 00:06:25.338 START TEST non_locking_app_on_locked_coremask 00:06:25.338 ************************************ 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71633 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71633 /var/tmp/spdk.sock 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71633 ']' 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
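The `default_locks_via_rpc` test above toggles locking with `rpc_cmd framework_disable_cpumask_locks` / `framework_enable_cpumask_locks`, i.e. JSON-RPC 2.0 requests sent to the `spdk_tgt` UNIX socket (`/var/tmp/spdk.sock`). A hypothetical sketch of building and sending such a request; the helper names are mine, and `send_rpc` assumes a live target, so only the request construction is exercised here:

```python
import json
import socket

def build_rpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body like the one rpc_cmd sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def send_rpc(sock_path, payload, bufsize=65536):
    """Deliver a request to a running spdk_tgt over its UNIX socket.
    Requires a live target; shown for completeness only."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(payload)
        return json.loads(s.recv(bufsize))

payload = build_rpc_request("framework_disable_cpumask_locks")
print(payload.decode())
```

With a target running, `send_rpc("/var/tmp/spdk.sock", payload)` would return the JSON response the shell helper parses; without one, the connect step fails, which is exactly the condition `waitforlisten` polls for in the log above.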
00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.338 08:50:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.338 [2024-11-28 08:50:19.429070] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:25.338 [2024-11-28 08:50:19.429190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71633 ] 00:06:25.628 [2024-11-28 08:50:19.578050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.628 [2024-11-28 08:50:19.611353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71649 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71649 /var/tmp/spdk2.sock 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71649 ']' 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.233 08:50:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.233 [2024-11-28 08:50:20.338151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:26.233 [2024-11-28 08:50:20.338441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71649 ] 00:06:26.490 [2024-11-28 08:50:20.492681] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.490 [2024-11-28 08:50:20.492729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.490 [2024-11-28 08:50:20.557669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71633 ']' 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
71633 00:06:27.424 killing process with pid 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71633' 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71633 00:06:27.424 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71633 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71649 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71649 ']' 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71649 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71649 00:06:27.991 killing process with pid 71649 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71649' 00:06:27.991 08:50:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71649 00:06:27.991 08:50:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71649 00:06:28.249 ************************************ 00:06:28.249 END TEST non_locking_app_on_locked_coremask 00:06:28.249 ************************************ 00:06:28.249 00:06:28.249 real 0m2.823s 00:06:28.249 user 0m3.094s 00:06:28.249 sys 0m0.773s 00:06:28.249 08:50:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.249 08:50:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.249 08:50:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.249 08:50:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.249 08:50:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.249 08:50:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.249 ************************************ 00:06:28.249 START TEST locking_app_on_unlocked_coremask 00:06:28.249 ************************************ 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:28.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
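The `killprocess` steps visible throughout the log probe liveness with `kill -0 $pid` (signal 0 checks existence without delivering anything) and read the process name with `ps --no-headers -o comm=`. A small Python sketch of the same two probes; `process_name` assumes a GNU `ps` is on `PATH`:

```python
import os
import subprocess

def process_alive(pid):
    """Emulate the harness's `kill -0 $pid`: signal 0 only checks existence."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False

def process_name(pid):
    """Emulate `ps --no-headers -o comm= $pid`, used to label reactor_0."""
    out = subprocess.run(
        ["ps", "--no-headers", "-o", "comm=", str(pid)],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

print(process_alive(os.getpid()))
```

Note that `kill -0` can also fail with a permission error for a live process owned by another user, so a robust check treats only "no such process" as dead, as the `try/except` above does.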
00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71707 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71707 /var/tmp/spdk.sock 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71707 ']' 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.249 08:50:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.249 [2024-11-28 08:50:22.284957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:28.249 [2024-11-28 08:50:22.285050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71707 ] 00:06:28.506 [2024-11-28 08:50:22.415409] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.506 [2024-11-28 08:50:22.415448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.506 [2024-11-28 08:50:22.444502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71723 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71723 /var/tmp/spdk2.sock 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71723 ']' 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.070 08:50:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.328 [2024-11-28 08:50:23.224639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.328 [2024-11-28 08:50:23.224917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71723 ] 00:06:29.328 [2024-11-28 08:50:23.370898] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.328 [2024-11-28 08:50:23.434640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.266 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.266 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.266 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71723 00:06:30.266 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71723 00:06:30.266 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71707 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71707 ']' 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71707 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71707 00:06:30.524 killing process with pid 71707 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71707' 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71707 00:06:30.524 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71707 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71723 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71723 ']' 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71723 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71723 00:06:31.092 killing process with pid 71723 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71723' 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71723 00:06:31.092 08:50:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 71723 00:06:31.092 00:06:31.092 real 0m2.951s 00:06:31.092 user 0m3.305s 00:06:31.092 sys 0m0.772s 00:06:31.092 08:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.092 ************************************ 00:06:31.092 END TEST locking_app_on_unlocked_coremask 00:06:31.092 ************************************ 00:06:31.092 08:50:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.349 08:50:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:31.349 08:50:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.349 08:50:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.349 08:50:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.349 ************************************ 00:06:31.349 START TEST locking_app_on_locked_coremask 00:06:31.349 ************************************ 00:06:31.349 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:31.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.349 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71781 00:06:31.349 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71781 /var/tmp/spdk.sock 00:06:31.349 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71781 ']' 00:06:31.349 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.350 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.350 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.350 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.350 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.350 08:50:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.350 [2024-11-28 08:50:25.292228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.350 [2024-11-28 08:50:25.292347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71781 ] 00:06:31.350 [2024-11-28 08:50:25.435768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.350 [2024-11-28 08:50:25.464910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.285 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.285 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.285 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71797 00:06:32.285 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71797 /var/tmp/spdk2.sock 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71797 /var/tmp/spdk2.sock 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71797 /var/tmp/spdk2.sock 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71797 ']' 00:06:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.286 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.286 [2024-11-28 08:50:26.180384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:32.286 [2024-11-28 08:50:26.180680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71797 ] 00:06:32.286 [2024-11-28 08:50:26.325583] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71781 has claimed it. 00:06:32.286 [2024-11-28 08:50:26.325626] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:32.856 ERROR: process (pid: 71797) is no longer running 00:06:32.856 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71797) - No such process 00:06:32.856 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.856 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:32.856 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:32.856 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.856 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.857 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.857 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71781 00:06:32.857 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71781 00:06:32.857 08:50:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71781 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71781 ']' 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71781 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71781 00:06:33.116 
killing process with pid 71781 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71781' 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71781 00:06:33.116 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71781 00:06:33.375 ************************************ 00:06:33.375 END TEST locking_app_on_locked_coremask 00:06:33.375 ************************************ 00:06:33.375 00:06:33.375 real 0m2.099s 00:06:33.375 user 0m2.352s 00:06:33.375 sys 0m0.517s 00:06:33.375 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.375 08:50:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 08:50:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.375 08:50:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.375 08:50:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.375 08:50:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.375 ************************************ 00:06:33.375 START TEST locking_overlapped_coremask 00:06:33.375 ************************************ 00:06:33.375 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.375 08:50:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71839 00:06:33.375 08:50:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71839 /var/tmp/spdk.sock 00:06:33.375 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71839 ']' 00:06:33.375 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.376 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.376 08:50:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.376 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.376 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.376 08:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.376 [2024-11-28 08:50:27.422885] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:33.376 [2024-11-28 08:50:27.423103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71839 ] 00:06:33.636 [2024-11-28 08:50:27.566680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.636 [2024-11-28 08:50:27.597112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.636 [2024-11-28 08:50:27.597270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.637 [2024-11-28 08:50:27.597334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71857 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71857 /var/tmp/spdk2.sock 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71857 /var/tmp/spdk2.sock 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71857 /var/tmp/spdk2.sock 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71857 ']' 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.207 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.465 [2024-11-28 08:50:28.325791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:34.465 [2024-11-28 08:50:28.325929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71857 ] 00:06:34.465 [2024-11-28 08:50:28.480092] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71839 has claimed it. 00:06:34.465 [2024-11-28 08:50:28.480153] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:35.036 ERROR: process (pid: 71857) is no longer running 00:06:35.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71857) - No such process 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71839 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71839 ']' 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71839 00:06:35.036 08:50:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71839 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71839' 00:06:35.036 killing process with pid 71839 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71839 00:06:35.036 08:50:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71839 00:06:35.297 00:06:35.297 real 0m1.885s 00:06:35.297 user 0m5.215s 00:06:35.297 sys 0m0.379s 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 ************************************ 00:06:35.297 END TEST locking_overlapped_coremask 00:06:35.297 ************************************ 00:06:35.297 08:50:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:35.297 08:50:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.297 08:50:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.297 08:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 ************************************ 00:06:35.297 START TEST 
locking_overlapped_coremask_via_rpc 00:06:35.297 ************************************ 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71899 00:06:35.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71899 /var/tmp/spdk.sock 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71899 ']' 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 08:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:35.297 [2024-11-28 08:50:29.366649] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:35.297 [2024-11-28 08:50:29.366767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71899 ] 00:06:35.571 [2024-11-28 08:50:29.511499] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.571 [2024-11-28 08:50:29.511538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.571 [2024-11-28 08:50:29.542853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.571 [2024-11-28 08:50:29.542971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.571 [2024-11-28 08:50:29.543090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71917 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71917 /var/tmp/spdk2.sock 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71917 ']' 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.139 08:50:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.139 08:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.399 [2024-11-28 08:50:30.268171] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:36.399 [2024-11-28 08:50:30.268453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:06:36.399 [2024-11-28 08:50:30.423125] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:36.399 [2024-11-28 08:50:30.423182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.399 [2024-11-28 08:50:30.511476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.399 [2024-11-28 08:50:30.511541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.399 [2024-11-28 08:50:30.511611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:37.330 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.331 08:50:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.331 [2024-11-28 08:50:31.119960] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71899 has claimed it. 00:06:37.331 request: 00:06:37.331 { 00:06:37.331 "method": "framework_enable_cpumask_locks", 00:06:37.331 "req_id": 1 00:06:37.331 } 00:06:37.331 Got JSON-RPC error response 00:06:37.331 response: 00:06:37.331 { 00:06:37.331 "code": -32603, 00:06:37.331 "message": "Failed to claim CPU core: 2" 00:06:37.331 } 00:06:37.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71899 /var/tmp/spdk.sock 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71899 ']' 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71917 /var/tmp/spdk2.sock 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71917 ']' 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.331 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.589 ************************************ 00:06:37.589 END TEST locking_overlapped_coremask_via_rpc 00:06:37.589 ************************************ 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.589 00:06:37.589 real 0m2.250s 00:06:37.589 user 0m1.054s 00:06:37.589 sys 0m0.130s 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.589 08:50:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.589 08:50:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:37.589 08:50:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71899 ]] 00:06:37.589 08:50:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71899 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71899 ']' 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71899 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71899 00:06:37.589 killing process with pid 71899 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71899' 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71899 00:06:37.589 08:50:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71899 00:06:37.846 08:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71917 ]] 00:06:37.846 08:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71917 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71917 ']' 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71917 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71917 00:06:37.846 killing process with pid 71917 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71917' 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71917 00:06:37.846 08:50:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71917 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.104 Process with pid 71899 is not found 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71899 ]] 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71899 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71899 ']' 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71899 00:06:38.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71899) - No such process 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71899 is not found' 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71917 ]] 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71917 00:06:38.104 Process with pid 71917 is not found 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71917 ']' 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71917 00:06:38.104 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71917) - No such process 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71917 is not found' 00:06:38.104 08:50:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.104 00:06:38.104 real 0m15.884s 00:06:38.104 user 0m28.022s 00:06:38.104 sys 0m4.218s 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.104 ************************************ 00:06:38.104 END TEST cpu_locks 00:06:38.104 
************************************ 00:06:38.104 08:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 ************************************ 00:06:38.361 END TEST event 00:06:38.361 ************************************ 00:06:38.361 00:06:38.361 real 0m41.462s 00:06:38.361 user 1m20.121s 00:06:38.361 sys 0m7.120s 00:06:38.361 08:50:32 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.361 08:50:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 08:50:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.361 08:50:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.361 08:50:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.361 08:50:32 -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 ************************************ 00:06:38.361 START TEST thread 00:06:38.361 ************************************ 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.361 * Looking for test storage... 
00:06:38.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.361 08:50:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.361 08:50:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.361 08:50:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.361 08:50:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.361 08:50:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.361 08:50:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.361 08:50:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.361 08:50:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.361 08:50:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.361 08:50:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.361 08:50:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.361 08:50:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:38.361 08:50:32 thread -- scripts/common.sh@345 -- # : 1 00:06:38.361 08:50:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.361 08:50:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.361 08:50:32 thread -- scripts/common.sh@365 -- # decimal 1 00:06:38.361 08:50:32 thread -- scripts/common.sh@353 -- # local d=1 00:06:38.361 08:50:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.361 08:50:32 thread -- scripts/common.sh@355 -- # echo 1 00:06:38.361 08:50:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.361 08:50:32 thread -- scripts/common.sh@366 -- # decimal 2 00:06:38.361 08:50:32 thread -- scripts/common.sh@353 -- # local d=2 00:06:38.361 08:50:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.361 08:50:32 thread -- scripts/common.sh@355 -- # echo 2 00:06:38.361 08:50:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.361 08:50:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.361 08:50:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.361 08:50:32 thread -- scripts/common.sh@368 -- # return 0 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.361 --rc genhtml_branch_coverage=1 00:06:38.361 --rc genhtml_function_coverage=1 00:06:38.361 --rc genhtml_legend=1 00:06:38.361 --rc geninfo_all_blocks=1 00:06:38.361 --rc geninfo_unexecuted_blocks=1 00:06:38.361 00:06:38.361 ' 00:06:38.361 08:50:32 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.361 --rc genhtml_branch_coverage=1 00:06:38.362 --rc genhtml_function_coverage=1 00:06:38.362 --rc genhtml_legend=1 00:06:38.362 --rc geninfo_all_blocks=1 00:06:38.362 --rc geninfo_unexecuted_blocks=1 00:06:38.362 00:06:38.362 ' 00:06:38.362 08:50:32 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.362 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.362 --rc genhtml_branch_coverage=1 00:06:38.362 --rc genhtml_function_coverage=1 00:06:38.362 --rc genhtml_legend=1 00:06:38.362 --rc geninfo_all_blocks=1 00:06:38.362 --rc geninfo_unexecuted_blocks=1 00:06:38.362 00:06:38.362 ' 00:06:38.362 08:50:32 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.362 --rc genhtml_branch_coverage=1 00:06:38.362 --rc genhtml_function_coverage=1 00:06:38.362 --rc genhtml_legend=1 00:06:38.362 --rc geninfo_all_blocks=1 00:06:38.362 --rc geninfo_unexecuted_blocks=1 00:06:38.362 00:06:38.362 ' 00:06:38.362 08:50:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.362 08:50:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:38.362 08:50:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.362 08:50:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.362 ************************************ 00:06:38.362 START TEST thread_poller_perf 00:06:38.362 ************************************ 00:06:38.362 08:50:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.362 [2024-11-28 08:50:32.439345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:38.362 [2024-11-28 08:50:32.439550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72044 ] 00:06:38.619 [2024-11-28 08:50:32.586217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.619 [2024-11-28 08:50:32.614996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.619 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:39.558 [2024-11-28T08:50:33.678Z] ====================================== 00:06:39.558 [2024-11-28T08:50:33.678Z] busy:2610478828 (cyc) 00:06:39.558 [2024-11-28T08:50:33.678Z] total_run_count: 411000 00:06:39.558 [2024-11-28T08:50:33.678Z] tsc_hz: 2600000000 (cyc) 00:06:39.558 [2024-11-28T08:50:33.678Z] ====================================== 00:06:39.558 [2024-11-28T08:50:33.678Z] poller_cost: 6351 (cyc), 2442 (nsec) 00:06:39.558 00:06:39.558 real 0m1.254s 00:06:39.558 ************************************ 00:06:39.558 END TEST thread_poller_perf 00:06:39.558 ************************************ 00:06:39.558 user 0m1.095s 00:06:39.558 sys 0m0.053s 00:06:39.558 08:50:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.558 08:50:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.816 08:50:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.816 08:50:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:39.816 08:50:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.816 08:50:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.816 ************************************ 00:06:39.816 START TEST thread_poller_perf 00:06:39.816 
************************************ 00:06:39.816 08:50:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.816 [2024-11-28 08:50:33.752511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.816 [2024-11-28 08:50:33.752642] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72075 ] 00:06:39.816 [2024-11-28 08:50:33.899363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.816 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:39.816 [2024-11-28 08:50:33.927656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.197 [2024-11-28T08:50:35.317Z] ====================================== 00:06:41.197 [2024-11-28T08:50:35.317Z] busy:2602399352 (cyc) 00:06:41.197 [2024-11-28T08:50:35.317Z] total_run_count: 5344000 00:06:41.197 [2024-11-28T08:50:35.317Z] tsc_hz: 2600000000 (cyc) 00:06:41.197 [2024-11-28T08:50:35.317Z] ====================================== 00:06:41.197 [2024-11-28T08:50:35.317Z] poller_cost: 486 (cyc), 186 (nsec) 00:06:41.197 00:06:41.197 real 0m1.259s 00:06:41.197 user 0m1.089s 00:06:41.197 sys 0m0.064s 00:06:41.197 08:50:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.197 ************************************ 00:06:41.197 END TEST thread_poller_perf 00:06:41.197 ************************************ 00:06:41.197 08:50:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.197 08:50:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:41.197 00:06:41.197 real 0m2.761s 00:06:41.197 user 0m2.289s 00:06:41.197 sys 0m0.239s 00:06:41.197 08:50:35 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.197 08:50:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.197 ************************************ 00:06:41.197 END TEST thread 00:06:41.197 ************************************ 00:06:41.197 08:50:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:41.197 08:50:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:41.197 08:50:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.197 08:50:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.197 08:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:41.197 ************************************ 00:06:41.197 START TEST app_cmdline 00:06:41.197 ************************************ 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:41.197 * Looking for test storage... 00:06:41.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.197 08:50:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.197 --rc genhtml_branch_coverage=1 00:06:41.197 --rc genhtml_function_coverage=1 00:06:41.197 --rc 
genhtml_legend=1 00:06:41.197 --rc geninfo_all_blocks=1 00:06:41.197 --rc geninfo_unexecuted_blocks=1 00:06:41.197 00:06:41.197 ' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.197 --rc genhtml_branch_coverage=1 00:06:41.197 --rc genhtml_function_coverage=1 00:06:41.197 --rc genhtml_legend=1 00:06:41.197 --rc geninfo_all_blocks=1 00:06:41.197 --rc geninfo_unexecuted_blocks=1 00:06:41.197 00:06:41.197 ' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.197 --rc genhtml_branch_coverage=1 00:06:41.197 --rc genhtml_function_coverage=1 00:06:41.197 --rc genhtml_legend=1 00:06:41.197 --rc geninfo_all_blocks=1 00:06:41.197 --rc geninfo_unexecuted_blocks=1 00:06:41.197 00:06:41.197 ' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.197 --rc genhtml_branch_coverage=1 00:06:41.197 --rc genhtml_function_coverage=1 00:06:41.197 --rc genhtml_legend=1 00:06:41.197 --rc geninfo_all_blocks=1 00:06:41.197 --rc geninfo_unexecuted_blocks=1 00:06:41.197 00:06:41.197 ' 00:06:41.197 08:50:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.197 08:50:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72153 00:06:41.197 08:50:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72153 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72153 ']' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.197 08:50:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.197 08:50:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.197 [2024-11-28 08:50:35.296815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:41.197 [2024-11-28 08:50:35.296939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72153 ] 00:06:41.458 [2024-11-28 08:50:35.446355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.458 [2024-11-28 08:50:35.475878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.031 08:50:36 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.031 08:50:36 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:42.031 08:50:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:42.293 { 00:06:42.293 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:42.293 "fields": { 00:06:42.293 "major": 24, 00:06:42.293 "minor": 9, 00:06:42.293 "patch": 1, 00:06:42.293 "suffix": "-pre", 00:06:42.293 "commit": "b18e1bd62" 00:06:42.293 } 00:06:42.293 } 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:42.293 08:50:36 app_cmdline 
-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.293 08:50:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.293 08:50:36 app_cmdline -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:42.293 08:50:36 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.555 request: 00:06:42.555 { 00:06:42.555 "method": "env_dpdk_get_mem_stats", 00:06:42.555 "req_id": 1 00:06:42.555 } 00:06:42.555 Got JSON-RPC error response 00:06:42.555 response: 00:06:42.555 { 00:06:42.555 "code": -32601, 00:06:42.555 "message": "Method not found" 00:06:42.555 } 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.555 08:50:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72153 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72153 ']' 00:06:42.555 08:50:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72153 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72153 00:06:42.556 killing process with pid 72153 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72153' 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 72153 00:06:42.556 08:50:36 app_cmdline -- common/autotest_common.sh@974 -- 
# wait 72153 00:06:42.815 00:06:42.815 real 0m1.647s 00:06:42.815 user 0m1.889s 00:06:42.815 sys 0m0.397s 00:06:42.815 08:50:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.815 ************************************ 00:06:42.815 END TEST app_cmdline 00:06:42.815 ************************************ 00:06:42.815 08:50:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.815 08:50:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:42.815 08:50:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.815 08:50:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.815 08:50:36 -- common/autotest_common.sh@10 -- # set +x 00:06:42.815 ************************************ 00:06:42.815 START TEST version 00:06:42.815 ************************************ 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:42.815 * Looking for test storage... 
00:06:42.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.815 08:50:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.815 08:50:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.815 08:50:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.815 08:50:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.815 08:50:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.815 08:50:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.815 08:50:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.815 08:50:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.815 08:50:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.815 08:50:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.815 08:50:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.815 08:50:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:42.815 08:50:36 version -- scripts/common.sh@345 -- # : 1 00:06:42.815 08:50:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.815 08:50:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.815 08:50:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:42.815 08:50:36 version -- scripts/common.sh@353 -- # local d=1 00:06:42.815 08:50:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.815 08:50:36 version -- scripts/common.sh@355 -- # echo 1 00:06:42.815 08:50:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.815 08:50:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:42.815 08:50:36 version -- scripts/common.sh@353 -- # local d=2 00:06:42.815 08:50:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.815 08:50:36 version -- scripts/common.sh@355 -- # echo 2 00:06:42.815 08:50:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.815 08:50:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.815 08:50:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.815 08:50:36 version -- scripts/common.sh@368 -- # return 0 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.815 --rc genhtml_branch_coverage=1 00:06:42.815 --rc genhtml_function_coverage=1 00:06:42.815 --rc genhtml_legend=1 00:06:42.815 --rc geninfo_all_blocks=1 00:06:42.815 --rc geninfo_unexecuted_blocks=1 00:06:42.815 00:06:42.815 ' 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.815 --rc genhtml_branch_coverage=1 00:06:42.815 --rc genhtml_function_coverage=1 00:06:42.815 --rc genhtml_legend=1 00:06:42.815 --rc geninfo_all_blocks=1 00:06:42.815 --rc geninfo_unexecuted_blocks=1 00:06:42.815 00:06:42.815 ' 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.815 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.815 --rc genhtml_branch_coverage=1 00:06:42.815 --rc genhtml_function_coverage=1 00:06:42.815 --rc genhtml_legend=1 00:06:42.815 --rc geninfo_all_blocks=1 00:06:42.815 --rc geninfo_unexecuted_blocks=1 00:06:42.815 00:06:42.815 ' 00:06:42.815 08:50:36 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.815 --rc genhtml_branch_coverage=1 00:06:42.815 --rc genhtml_function_coverage=1 00:06:42.815 --rc genhtml_legend=1 00:06:42.815 --rc geninfo_all_blocks=1 00:06:42.815 --rc geninfo_unexecuted_blocks=1 00:06:42.815 00:06:42.815 ' 00:06:42.815 08:50:36 version -- app/version.sh@17 -- # get_header_version major 00:06:42.815 08:50:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # cut -f2 00:06:42.815 08:50:36 version -- app/version.sh@17 -- # major=24 00:06:42.815 08:50:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:42.815 08:50:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # cut -f2 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.815 08:50:36 version -- app/version.sh@18 -- # minor=9 00:06:42.815 08:50:36 version -- app/version.sh@19 -- # get_header_version patch 00:06:42.815 08:50:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # cut -f2 00:06:42.815 08:50:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.815 08:50:36 version -- app/version.sh@19 -- # patch=1 00:06:42.815 
08:50:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:43.077 08:50:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:43.078 08:50:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:43.078 08:50:36 version -- app/version.sh@14 -- # cut -f2 00:06:43.078 08:50:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:43.078 08:50:36 version -- app/version.sh@22 -- # version=24.9 00:06:43.078 08:50:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:43.078 08:50:36 version -- app/version.sh@25 -- # version=24.9.1 00:06:43.078 08:50:36 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:43.078 08:50:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:43.078 08:50:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:43.078 08:50:36 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:43.078 08:50:36 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:43.078 00:06:43.078 real 0m0.197s 00:06:43.078 user 0m0.125s 00:06:43.078 sys 0m0.100s 00:06:43.078 08:50:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.078 08:50:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:43.078 ************************************ 00:06:43.078 END TEST version 00:06:43.078 ************************************ 00:06:43.078 08:50:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:43.078 08:50:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:43.078 08:50:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:43.078 08:50:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:43.078 08:50:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:43.078 08:50:37 -- 
spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:43.078 08:50:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:43.078 08:50:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:43.078 08:50:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.078 08:50:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.078 08:50:37 -- common/autotest_common.sh@10 -- # set +x 00:06:43.078 ************************************ 00:06:43.078 START TEST blockdev_nvme 00:06:43.078 ************************************ 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:43.078 * Looking for test storage... 00:06:43.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 
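The trace above walks through the version comparison in scripts/common.sh: both version strings are split on `.`, `-`, and `:` via `IFS=.-:` and `read -ra`, then compared component by component. A minimal sketch of that logic, assuming simplified behavior (the `_demo` name is hypothetical, and unlike the real `cmp_versions` this sketch handles only numeric components):

```shell
# Sketch of the scripts/common.sh-style version comparison seen in the trace:
# split each version on . - : and compare fields left to right, padding the
# shorter one with zeros.
cmp_versions_demo() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v a b
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}            # missing fields count as 0
        if (( a > b )); then [ "$op" = '>' ]; return; fi
        if (( a < b )); then [ "$op" = '<' ]; return; fi
    done
    [ "$op" = '=' ]                                 # all fields equal
}

cmp_versions_demo 1.15 '<' 2 && echo "1.15 < 2"
```

This matches the `lt 1.15 2` check in the trace: `lcov --version` 1.15 is compared against 2 to decide which `--rc` coverage options apply.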
00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.078 08:50:37 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.078 --rc genhtml_branch_coverage=1 00:06:43.078 --rc genhtml_function_coverage=1 00:06:43.078 --rc genhtml_legend=1 00:06:43.078 --rc geninfo_all_blocks=1 00:06:43.078 --rc geninfo_unexecuted_blocks=1 00:06:43.078 
00:06:43.078 ' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.078 --rc genhtml_branch_coverage=1 00:06:43.078 --rc genhtml_function_coverage=1 00:06:43.078 --rc genhtml_legend=1 00:06:43.078 --rc geninfo_all_blocks=1 00:06:43.078 --rc geninfo_unexecuted_blocks=1 00:06:43.078 00:06:43.078 ' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.078 --rc genhtml_branch_coverage=1 00:06:43.078 --rc genhtml_function_coverage=1 00:06:43.078 --rc genhtml_legend=1 00:06:43.078 --rc geninfo_all_blocks=1 00:06:43.078 --rc geninfo_unexecuted_blocks=1 00:06:43.078 00:06:43.078 ' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.078 --rc genhtml_branch_coverage=1 00:06:43.078 --rc genhtml_function_coverage=1 00:06:43.078 --rc genhtml_legend=1 00:06:43.078 --rc geninfo_all_blocks=1 00:06:43.078 --rc geninfo_unexecuted_blocks=1 00:06:43.078 00:06:43.078 ' 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:43.078 08:50:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:43.078 08:50:37 
blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:43.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72314 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72314 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72314 ']' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:43.078 08:50:37 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.078 08:50:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.337 [2024-11-28 08:50:37.241640] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.337 [2024-11-28 08:50:37.241903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72314 ] 00:06:43.337 [2024-11-28 08:50:37.387480] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.337 [2024-11-28 08:50:37.418049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": 
"bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:44.277 08:50:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.277 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.539 08:50:38 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.539 08:50:38 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:44.539 08:50:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:44.540 08:50:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b84cd659-b714-40d5-9bff-b9540f6c18b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b84cd659-b714-40d5-9bff-b9540f6c18b1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f3d58848-2734-4458-b5f4-232b46f816ca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f3d58848-2734-4458-b5f4-232b46f816ca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": 
"active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "07e52b7f-d22c-4fe3-84f8-a4cc02b9321a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "07e52b7f-d22c-4fe3-84f8-a4cc02b9321a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "259149c4-2bf5-4ac3-bf95-12e85ce18291"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "259149c4-2bf5-4ac3-bf95-12e85ce18291",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": 
{' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "35144b86-33d2-4f6a-a55f-499f417e4ce5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "35144b86-33d2-4f6a-a55f-499f417e4ce5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' 
' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ceb0228f-8454-435f-b291-b3ff1cab1ea7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ceb0228f-8454-435f-b291-b3ff1cab1ea7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' 
},' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:44.540 08:50:38 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:44.540 08:50:38 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:44.540 08:50:38 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:44.540 08:50:38 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72314 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72314 ']' 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72314 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72314 00:06:44.540 killing process with pid 72314 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72314' 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72314 00:06:44.540 08:50:38 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72314 00:06:44.802 08:50:38 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:44.802 08:50:38 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:44.802 08:50:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:44.802 08:50:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 
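The `killprocess 72314` steps above (`kill -0`, `ps --no-headers -o comm=`, the `sudo`/`reactor_0` check, then `kill` and `wait`) follow a common teardown pattern. A hedged sketch of that pattern, under the assumption that the helper only needs to verify liveness and identity before killing (the `_demo` name is hypothetical and this is not the exact autotest_common.sh implementation):

```shell
# Teardown pattern from the trace: confirm the pid is alive, check what it
# actually is via its comm name, then kill it and reap it.
killprocess_demo() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1        # must still be running
    name=$(ps --no-headers -o comm= "$pid")       # identify before killing
    [ "$name" = sudo ] && return 1                # refuse to kill e.g. sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}

sleep 30 &
killprocess_demo $!
```

The identity check explains the `ps --no-headers -o comm= 72314` line in the trace: the helper looks at the process name before deciding how to signal it.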
00:06:44.802 08:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.802 ************************************ 00:06:44.802 START TEST bdev_hello_world 00:06:44.802 ************************************ 00:06:44.802 08:50:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:44.802 [2024-11-28 08:50:38.848146] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:44.802 [2024-11-28 08:50:38.848256] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:06:45.063 [2024-11-28 08:50:38.996433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.063 [2024-11-28 08:50:39.027274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.373 [2024-11-28 08:50:39.393005] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:45.373 [2024-11-28 08:50:39.393047] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:45.373 [2024-11-28 08:50:39.393069] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:45.373 [2024-11-28 08:50:39.395096] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:45.373 [2024-11-28 08:50:39.395986] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:45.373 [2024-11-28 08:50:39.396032] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:45.373 [2024-11-28 08:50:39.396157] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
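The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` lines framing each sub-test in this log come from a `run_test`-style wrapper. A minimal sketch of that shape, assuming only the behavior visible in the log (banner, timed invocation, banner; the `_demo` name is hypothetical):

```shell
# run_test-style wrapper: print a START banner, time the command, print an
# END banner, and propagate the command's exit status.
run_test_demo() {
    local name=$1 rc
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                       # bash `time` keyword; stats go to stderr
    rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test_demo demo_true true
```

This is why every sub-test in the log (version, blockdev_nvme, bdev_hello_world, bdev_bounds) is bracketed by identical asterisk banners and followed by timing output.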
00:06:45.373 00:06:45.373 [2024-11-28 08:50:39.396177] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:45.659 ************************************ 00:06:45.659 END TEST bdev_hello_world 00:06:45.659 ************************************ 00:06:45.659 00:06:45.659 real 0m0.763s 00:06:45.659 user 0m0.492s 00:06:45.659 sys 0m0.168s 00:06:45.659 08:50:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.659 08:50:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:45.659 08:50:39 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:45.659 08:50:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.659 08:50:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.659 08:50:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.659 ************************************ 00:06:45.659 START TEST bdev_bounds 00:06:45.659 ************************************ 00:06:45.659 Process bdevio pid: 72418 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72418 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72418' 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72418 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72418 ']' 00:06:45.659 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.659 08:50:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:45.659 [2024-11-28 08:50:39.655971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:45.659 [2024-11-28 08:50:39.656231] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72418 ] 00:06:45.921 [2024-11-28 08:50:39.799938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.921 [2024-11-28 08:50:39.839484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.921 [2024-11-28 08:50:39.839762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.921 [2024-11-28 08:50:39.839823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.492 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.492 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:46.492 08:50:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:46.492 I/O targets: 00:06:46.492 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:46.493 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:46.493 
Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:46.493 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:46.493 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:46.493 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:46.493 00:06:46.493 00:06:46.493 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.493 http://cunit.sourceforge.net/ 00:06:46.493 00:06:46.493 00:06:46.493 Suite: bdevio tests on: Nvme3n1 00:06:46.493 Test: blockdev write read block ...passed 00:06:46.493 Test: blockdev write zeroes read block ...passed 00:06:46.493 Test: blockdev write zeroes read no split ...passed 00:06:46.755 Test: blockdev write zeroes read split ...passed 00:06:46.755 Test: blockdev write zeroes read split partial ...passed 00:06:46.755 Test: blockdev reset ...[2024-11-28 08:50:40.619856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:46.755 [2024-11-28 08:50:40.621921] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:46.755 passed 00:06:46.755 Test: blockdev write read 8 blocks ...passed 00:06:46.755 Test: blockdev write read size > 128k ...passed 00:06:46.755 Test: blockdev write read invalid size ...passed 00:06:46.755 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.755 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.755 Test: blockdev write read max offset ...passed 00:06:46.755 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.755 Test: blockdev writev readv 8 blocks ...passed 00:06:46.755 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.755 Test: blockdev writev readv block ...passed 00:06:46.755 Test: blockdev writev readv size > 128k ...passed 00:06:46.755 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.755 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.629305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:46.755 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c4406000 len:0x1000 00:06:46.755 [2024-11-28 08:50:40.629509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.755 passed 00:06:46.755 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.755 Test: blockdev nvme admin passthru ...[2024-11-28 08:50:40.631084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.755 [2024-11-28 08:50:40.631122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.755 passed 00:06:46.755 Test: blockdev copy ...passed 00:06:46.755 Suite: bdevio tests on: Nvme2n3 00:06:46.755 Test: blockdev write read block ...passed 00:06:46.755 Test: blockdev write zeroes read block ...passed 00:06:46.755 Test: blockdev write zeroes read no 
split ...passed 00:06:46.755 Test: blockdev write zeroes read split ...passed 00:06:46.755 Test: blockdev write zeroes read split partial ...passed 00:06:46.755 Test: blockdev reset ...[2024-11-28 08:50:40.658329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:46.755 [2024-11-28 08:50:40.664713] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.755 passed 00:06:46.755 Test: blockdev write read 8 blocks ...passed 00:06:46.755 Test: blockdev write read size > 128k ...passed 00:06:46.755 Test: blockdev write read invalid size ...passed 00:06:46.755 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.755 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.755 Test: blockdev write read max offset ...passed 00:06:46.755 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.755 Test: blockdev writev readv 8 blocks ...passed 00:06:46.755 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.755 Test: blockdev writev readv block ...passed 00:06:46.755 Test: blockdev writev readv size > 128k ...passed 00:06:46.755 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.755 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.677126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7005000 len:0x1000 00:06:46.755 [2024-11-28 08:50:40.677167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.755 passed 00:06:46.755 Test: blockdev nvme passthru rw ...passed 00:06:46.755 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.755 Test: blockdev nvme admin passthru ...[2024-11-28 08:50:40.678678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1
0x0 PRP2 0x0 00:06:46.755 [2024-11-28 08:50:40.678709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.755 passed 00:06:46.755 Test: blockdev copy ...passed 00:06:46.755 Suite: bdevio tests on: Nvme2n2 00:06:46.755 Test: blockdev write read block ...passed 00:06:46.755 Test: blockdev write zeroes read block ...passed 00:06:46.755 Test: blockdev write zeroes read no split ...passed 00:06:46.755 Test: blockdev write zeroes read split ...passed 00:06:46.755 Test: blockdev write zeroes read split partial ...passed 00:06:46.755 Test: blockdev reset ...[2024-11-28 08:50:40.705003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:46.755 [2024-11-28 08:50:40.707192] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.755 passed 00:06:46.755 Test: blockdev write read 8 blocks ...passed 00:06:46.755 Test: blockdev write read size > 128k ...passed 00:06:46.755 Test: blockdev write read invalid size ...passed 00:06:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.756 Test: blockdev write read max offset ...passed 00:06:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.756 Test: blockdev writev readv 8 blocks ...passed 00:06:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.756 Test: blockdev writev readv block ...passed 00:06:46.756 Test: blockdev writev readv size > 128k ...passed 00:06:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.756 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.713568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7436000 len:0x1000 00:06:46.756 [2024-11-28 08:50:40.713609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev nvme passthru rw ...passed 00:06:46.756 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.756 Test: blockdev nvme admin passthru ...[2024-11-28 08:50:40.714313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.756 [2024-11-28 08:50:40.714341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev copy ...passed 00:06:46.756 Suite: bdevio tests on: Nvme2n1 00:06:46.756 Test: blockdev write read block ...passed 00:06:46.756 Test: blockdev write zeroes read block ...passed 00:06:46.756 Test: blockdev write zeroes read no split ...passed 00:06:46.756 Test: blockdev write zeroes read split ...passed 00:06:46.756 Test: blockdev write zeroes read split partial ...passed 00:06:46.756 Test: blockdev reset ...[2024-11-28 08:50:40.728391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:46.756 [2024-11-28 08:50:40.730323] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.756 passed 00:06:46.756 Test: blockdev write read 8 blocks ...
00:06:46.756 passed 00:06:46.756 Test: blockdev write read size > 128k ...passed 00:06:46.756 Test: blockdev write read invalid size ...passed 00:06:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.756 Test: blockdev write read max offset ...passed 00:06:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.756 Test: blockdev writev readv 8 blocks ...passed 00:06:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.756 Test: blockdev writev readv block ...passed 00:06:46.756 Test: blockdev writev readv size > 128k ...passed 00:06:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.756 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.736482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7430000 len:0x1000 00:06:46.756 [2024-11-28 08:50:40.736628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev nvme passthru rw ...passed 00:06:46.756 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.756 Test: blockdev nvme admin passthru ...[2024-11-28 08:50:40.737549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.756 [2024-11-28 08:50:40.737581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev copy ...passed 00:06:46.756 Suite: bdevio tests on: Nvme1n1 00:06:46.756 Test: blockdev write read block ...passed 00:06:46.756 Test: blockdev write zeroes read block ...passed 00:06:46.756 Test: blockdev write zeroes read no split ...passed 00:06:46.756 Test: blockdev write zeroes
read split ...passed 00:06:46.756 Test: blockdev write zeroes read split partial ...passed 00:06:46.756 Test: blockdev reset ...[2024-11-28 08:50:40.752692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:46.756 [2024-11-28 08:50:40.755657] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.756 passed 00:06:46.756 Test: blockdev write read 8 blocks ...passed 00:06:46.756 Test: blockdev write read size > 128k ...passed 00:06:46.756 Test: blockdev write read invalid size ...passed 00:06:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.756 Test: blockdev write read max offset ...passed 00:06:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.756 Test: blockdev writev readv 8 blocks ...passed 00:06:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.756 Test: blockdev writev readv block ...passed 00:06:46.756 Test: blockdev writev readv size > 128k ...passed 00:06:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.756 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.766295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d742c000 len:0x1000 00:06:46.756 [2024-11-28 08:50:40.766346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev nvme passthru rw ...passed 00:06:46.756 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:50:40.767636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.756 [2024-11-28 08:50:40.767765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev nvme admin passthru ...passed 00:06:46.756 Test: blockdev copy ...passed 00:06:46.756 Suite: bdevio tests on: Nvme0n1 00:06:46.756 Test: blockdev write read block ...passed 00:06:46.756 Test: blockdev write zeroes read block ...passed 00:06:46.756 Test: blockdev write zeroes read no split ...passed 00:06:46.756 Test: blockdev write zeroes read split ...passed 00:06:46.756 Test: blockdev write zeroes read split partial ...passed 00:06:46.756 Test: blockdev reset ...[2024-11-28 08:50:40.791824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:46.756 [2024-11-28 08:50:40.794562] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.756 passed 00:06:46.756 Test: blockdev write read 8 blocks ...passed 00:06:46.756 Test: blockdev write read size > 128k ...passed 00:06:46.756 Test: blockdev write read invalid size ...passed 00:06:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.756 Test: blockdev write read max offset ...passed 00:06:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.756 Test: blockdev writev readv 8 blocks ...passed 00:06:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.756 Test: blockdev writev readv block ...passed 00:06:46.756 Test: blockdev writev readv size > 128k ...passed 00:06:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.756 Test: blockdev comparev and writev ...[2024-11-28 08:50:40.800899] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:46.756 separate metadata which is not supported yet. passed 00:06:46.756 Test: blockdev nvme passthru rw ...
00:06:46.756 passed 00:06:46.756 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:50:40.801436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:46.756 [2024-11-28 08:50:40.801470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:46.756 passed 00:06:46.756 Test: blockdev nvme admin passthru ...passed 00:06:46.756 Test: blockdev copy ...passed 00:06:46.756 00:06:46.756 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.756 suites 6 6 n/a 0 0 00:06:46.756 tests 138 138 138 0 0 00:06:46.756 asserts 893 893 893 0 n/a 00:06:46.756 00:06:46.756 Elapsed time = 0.449 seconds 00:06:46.756 0 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72418 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72418 ']' 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72418 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72418 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72418' 00:06:46.756 killing process with pid 72418 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72418 00:06:46.756 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72418 00:06:47.018 08:50:40
blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:47.018 00:06:47.018 real 0m1.403s 00:06:47.018 user 0m3.530s 00:06:47.018 sys 0m0.257s 00:06:47.018 08:50:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.018 08:50:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:47.018 ************************************ 00:06:47.018 END TEST bdev_bounds 00:06:47.018 ************************************ 00:06:47.018 08:50:41 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:47.018 08:50:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:47.018 08:50:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.018 08:50:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.018 ************************************ 00:06:47.018 START TEST bdev_nbd 00:06:47.018 ************************************ 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:47.018 08:50:41 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:47.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72461 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72461 /var/tmp/spdk-nbd.sock 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72461 ']' 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:47.018 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:47.018 [2024-11-28 08:50:41.110929] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:47.018 [2024-11-28 08:50:41.111040] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.278 [2024-11-28 08:50:41.260497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.278 [2024-11-28 08:50:41.307984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:48.216 
08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.216 08:50:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.216 1+0 records in 00:06:48.216 1+0 records out 00:06:48.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618999 s, 6.6 MB/s 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.216 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- 
# break 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.477 1+0 records in 00:06:48.477 1+0 records out 00:06:48.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000852068 s, 4.8 MB/s 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.477 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.737 
08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.737 1+0 records in 00:06:48.737 1+0 records out 00:06:48.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869747 s, 4.7 MB/s 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.737 08:50:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:48.997 08:50:43 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.997 1+0 records in 00:06:48.997 1+0 records out 00:06:48.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103804 s, 3.9 MB/s 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.997 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:49.257 08:50:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.257 1+0 records in 00:06:49.257 1+0 records out 00:06:49.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000782641 s, 5.2 MB/s 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ 
)) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:49.257 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.518 1+0 records in 00:06:49.518 1+0 records out 00:06:49.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010043 s, 4.1 MB/s 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:49.518 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.777 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd0", 00:06:49.777 "bdev_name": "Nvme0n1" 00:06:49.777 }, 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd1", 00:06:49.777 "bdev_name": "Nvme1n1" 00:06:49.777 }, 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd2", 00:06:49.777 "bdev_name": "Nvme2n1" 00:06:49.777 }, 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd3", 00:06:49.777 "bdev_name": "Nvme2n2" 00:06:49.777 }, 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd4", 00:06:49.777 "bdev_name": "Nvme2n3" 00:06:49.777 }, 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd5", 00:06:49.777 "bdev_name": "Nvme3n1" 00:06:49.777 } 00:06:49.777 ]' 00:06:49.777 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:49.777 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:49.777 { 00:06:49.777 "nbd_device": "/dev/nbd0", 00:06:49.778 "bdev_name": "Nvme0n1" 00:06:49.778 }, 00:06:49.778 { 00:06:49.778 "nbd_device": "/dev/nbd1", 00:06:49.778 "bdev_name": "Nvme1n1" 00:06:49.778 }, 00:06:49.778 { 00:06:49.778 "nbd_device": "/dev/nbd2", 00:06:49.778 "bdev_name": "Nvme2n1" 00:06:49.778 }, 00:06:49.778 { 00:06:49.778 "nbd_device": "/dev/nbd3", 00:06:49.778 "bdev_name": "Nvme2n2" 00:06:49.778 }, 00:06:49.778 { 00:06:49.778 "nbd_device": "/dev/nbd4", 
00:06:49.778 "bdev_name": "Nvme2n3" 00:06:49.778 }, 00:06:49.778 { 00:06:49.778 "nbd_device": "/dev/nbd5", 00:06:49.778 "bdev_name": "Nvme3n1" 00:06:49.778 } 00:06:49.778 ]' 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.778 08:50:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:50.037 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.297 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:06:50.557 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.818 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:51.077 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.078 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.078 08:50:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.078 08:50:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.078 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:51.336 08:50:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.336 
08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.336 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:51.596 /dev/nbd0 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.596 1+0 records in 00:06:51.596 1+0 records out 00:06:51.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00287259 s, 1.4 MB/s 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.596 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:51.857 /dev/nbd1 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.857 1+0 records in 00:06:51.857 1+0 records out 00:06:51.857 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000476236 s, 8.6 MB/s 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.857 08:50:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:52.116 /dev/nbd10 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.116 1+0 records in 00:06:52.116 1+0 records out 00:06:52.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608112 s, 6.7 MB/s 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.116 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:52.375 /dev/nbd11 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.375 
08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.375 1+0 records in 00:06:52.375 1+0 records out 00:06:52.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436712 s, 9.4 MB/s 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.375 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:52.632 /dev/nbd12 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 
)) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.632 1+0 records in 00:06:52.632 1+0 records out 00:06:52.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615851 s, 6.7 MB/s 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.632 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:52.891 /dev/nbd13 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.891 1+0 records in 00:06:52.891 1+0 records out 00:06:52.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560568 s, 7.3 MB/s 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.891 08:50:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd0", 00:06:53.149 "bdev_name": "Nvme0n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd1", 00:06:53.149 "bdev_name": "Nvme1n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd10", 00:06:53.149 "bdev_name": "Nvme2n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd11", 00:06:53.149 "bdev_name": "Nvme2n2" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd12", 00:06:53.149 "bdev_name": "Nvme2n3" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd13", 00:06:53.149 "bdev_name": "Nvme3n1" 00:06:53.149 } 00:06:53.149 ]' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd0", 00:06:53.149 "bdev_name": "Nvme0n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd1", 00:06:53.149 "bdev_name": "Nvme1n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd10", 00:06:53.149 "bdev_name": "Nvme2n1" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd11", 00:06:53.149 "bdev_name": "Nvme2n2" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd12", 00:06:53.149 "bdev_name": "Nvme2n3" 00:06:53.149 }, 00:06:53.149 { 00:06:53.149 "nbd_device": "/dev/nbd13", 00:06:53.149 "bdev_name": "Nvme3n1" 00:06:53.149 } 00:06:53.149 ]' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.149 /dev/nbd1 00:06:53.149 /dev/nbd10 00:06:53.149 /dev/nbd11 00:06:53.149 /dev/nbd12 00:06:53.149 /dev/nbd13' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 
00:06:53.149 /dev/nbd1 00:06:53.149 /dev/nbd10 00:06:53.149 /dev/nbd11 00:06:53.149 /dev/nbd12 00:06:53.149 /dev/nbd13' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:53.149 256+0 records in 00:06:53.149 256+0 records out 00:06:53.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119815 s, 87.5 MB/s 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.149 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.410 256+0 records in 00:06:53.410 256+0 records out 00:06:53.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.204221 s, 5.1 MB/s 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.410 256+0 records in 00:06:53.410 256+0 records out 00:06:53.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.081519 s, 12.9 MB/s 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:53.410 256+0 records in 00:06:53.410 256+0 records out 00:06:53.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0813829 s, 12.9 MB/s 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.410 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:53.670 256+0 records in 00:06:53.670 256+0 records out 00:06:53.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104325 s, 10.1 MB/s 00:06:53.670 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.670 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:53.670 256+0 records in 00:06:53.670 256+0 records out 00:06:53.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0823557 s, 12.7 MB/s 00:06:53.670 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.670 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:53.929 256+0 records in 00:06:53.929 
256+0 records out 00:06:53.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146533 s, 7.2 MB/s 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.929 08:50:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.186 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.187 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.187 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.187 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.444 08:50:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.444 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.754 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.755 08:50:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.031 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.292 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:55.553 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:55.813 malloc_lvol_verify 00:06:55.813 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:56.074 79cb2f5d-3797-4251-9b74-cadacdf63300 00:06:56.074 08:50:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:56.335 70db1ce1-7991-4544-9cfd-6d058a6dd0dd 00:06:56.335 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:56.335 /dev/nbd0 00:06:56.595 08:50:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:56.595 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:56.595 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:56.595 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:56.595 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:56.595 mke2fs 1.47.0 (5-Feb-2023) 00:06:56.595 Discarding device blocks: 0/4096 done 00:06:56.596 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:56.596 00:06:56.596 Allocating group tables: 0/1 done 00:06:56.596 Writing inode tables: 0/1 done 00:06:56.596 Creating journal (1024 blocks): done 00:06:56.596 Writing superblocks and filesystem accounting information: 0/1 done 00:06:56.596 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72461 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72461 ']' 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72461 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72461 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.596 killing process with pid 72461 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72461' 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72461 00:06:56.596 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72461 00:06:56.856 08:50:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:56.856 00:06:56.856 real 0m9.812s 00:06:56.856 user 0m14.198s 00:06:56.856 sys 0m3.365s 00:06:56.856 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.856 08:50:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:56.856 ************************************ 
00:06:56.856 END TEST bdev_nbd 00:06:56.856 ************************************ 00:06:56.856 08:50:50 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:56.856 08:50:50 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:56.856 skipping fio tests on NVMe due to multi-ns failures. 00:06:56.856 08:50:50 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:56.856 08:50:50 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:56.856 08:50:50 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:56.856 08:50:50 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:56.856 08:50:50 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.856 08:50:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.856 ************************************ 00:06:56.856 START TEST bdev_verify 00:06:56.856 ************************************ 00:06:56.856 08:50:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:56.856 [2024-11-28 08:50:50.949784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:56.856 [2024-11-28 08:50:50.949881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:06:57.115 [2024-11-28 08:50:51.085835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.115 [2024-11-28 08:50:51.118159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.115 [2024-11-28 08:50:51.118269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.686 Running I/O for 5 seconds... 00:06:59.567 19008.00 IOPS, 74.25 MiB/s [2024-11-28T08:50:55.068Z] 20448.00 IOPS, 79.88 MiB/s [2024-11-28T08:50:56.010Z] 20864.00 IOPS, 81.50 MiB/s [2024-11-28T08:50:56.954Z] 21072.00 IOPS, 82.31 MiB/s [2024-11-28T08:50:56.954Z] 20876.80 IOPS, 81.55 MiB/s 00:07:02.834 Latency(us) 00:07:02.834 [2024-11-28T08:50:56.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.834 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0xbd0bd 00:07:02.834 Nvme0n1 : 5.09 1734.83 6.78 0.00 0.00 73622.52 14518.74 67754.14 00:07:02.834 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:02.834 Nvme0n1 : 5.10 1707.28 6.67 0.00 0.00 74423.72 16736.89 61704.66 00:07:02.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0xa0000 00:07:02.834 Nvme1n1 : 5.09 1734.30 6.77 0.00 0.00 73483.53 16031.11 64931.05 00:07:02.834 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0xa0000 length 0xa0000 00:07:02.834 Nvme1n1 : 5.10 1706.80 6.67 0.00 0.00 74335.35 15526.99 62107.96 00:07:02.834 Job: 
Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0x80000 00:07:02.834 Nvme2n1 : 5.09 1733.51 6.77 0.00 0.00 73410.69 17341.83 63721.16 00:07:02.834 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x80000 length 0x80000 00:07:02.834 Nvme2n1 : 5.10 1705.96 6.66 0.00 0.00 74244.45 14317.10 62511.26 00:07:02.834 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0x80000 00:07:02.834 Nvme2n2 : 5.10 1732.11 6.77 0.00 0.00 73317.87 17946.78 63721.16 00:07:02.834 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x80000 length 0x80000 00:07:02.834 Nvme2n2 : 5.09 1709.41 6.68 0.00 0.00 74716.28 14216.27 66544.25 00:07:02.834 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0x80000 00:07:02.834 Nvme2n3 : 5.10 1731.29 6.76 0.00 0.00 73192.68 15426.17 66140.95 00:07:02.834 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x80000 length 0x80000 00:07:02.834 Nvme2n3 : 5.09 1708.57 6.67 0.00 0.00 74643.21 17140.18 61301.37 00:07:02.834 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x0 length 0x20000 00:07:02.834 Nvme3n1 : 5.10 1730.47 6.76 0.00 0.00 73098.01 10939.47 68157.44 00:07:02.834 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.834 Verification LBA range: start 0x20000 length 0x20000 00:07:02.834 Nvme3n1 : 5.10 1707.75 6.67 0.00 0.00 74529.73 19156.68 60091.47 00:07:02.834 [2024-11-28T08:50:56.954Z] =================================================================================================================== 00:07:02.834 
[2024-11-28T08:50:56.954Z] Total : 20642.30 80.63 0.00 0.00 73914.05 10939.47 68157.44 00:07:03.407 00:07:03.407 real 0m6.413s 00:07:03.407 user 0m12.096s 00:07:03.407 sys 0m0.207s 00:07:03.407 08:50:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.407 ************************************ 00:07:03.407 END TEST bdev_verify 00:07:03.407 ************************************ 00:07:03.407 08:50:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:03.407 08:50:57 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:03.407 08:50:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:03.407 08:50:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.407 08:50:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.407 ************************************ 00:07:03.407 START TEST bdev_verify_big_io 00:07:03.407 ************************************ 00:07:03.407 08:50:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:03.407 [2024-11-28 08:50:57.455176] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:03.407 [2024-11-28 08:50:57.455335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:07:03.669 [2024-11-28 08:50:57.609559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.669 [2024-11-28 08:50:57.660864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.669 [2024-11-28 08:50:57.660945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.276 Running I/O for 5 seconds... 00:07:08.539 1564.00 IOPS, 97.75 MiB/s [2024-11-28T08:51:04.570Z] 2059.00 IOPS, 128.69 MiB/s [2024-11-28T08:51:04.570Z] 2161.67 IOPS, 135.10 MiB/s 00:07:10.450 Latency(us) 00:07:10.450 [2024-11-28T08:51:04.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.450 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x0 length 0xbd0b 00:07:10.450 Nvme0n1 : 5.83 98.87 6.18 0.00 0.00 1234618.77 37506.76 1206669.00 00:07:10.450 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:10.450 Nvme0n1 : 5.76 122.25 7.64 0.00 0.00 1009342.91 50009.01 993727.41 00:07:10.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x0 length 0xa000 00:07:10.450 Nvme1n1 : 5.97 102.74 6.42 0.00 0.00 1150383.20 52832.10 1116330.14 00:07:10.450 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0xa000 length 0xa000 00:07:10.450 Nvme1n1 : 5.87 127.08 7.94 0.00 0.00 950443.37 41539.74 871124.68 00:07:10.450 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: 
start 0x0 length 0x8000 00:07:10.450 Nvme2n1 : 5.88 95.20 5.95 0.00 0.00 1205242.33 53235.40 1716438.25 00:07:10.450 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x8000 length 0x8000 00:07:10.450 Nvme2n1 : 5.87 126.87 7.93 0.00 0.00 922610.33 41741.39 871124.68 00:07:10.450 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x0 length 0x8000 00:07:10.450 Nvme2n2 : 5.97 104.38 6.52 0.00 0.00 1054541.32 81062.99 1342177.28 00:07:10.450 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x8000 length 0x8000 00:07:10.450 Nvme2n2 : 5.88 126.46 7.90 0.00 0.00 896618.38 40128.20 884030.23 00:07:10.450 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x0 length 0x8000 00:07:10.450 Nvme2n3 : 6.00 114.42 7.15 0.00 0.00 935261.63 12703.90 1374441.16 00:07:10.450 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x8000 length 0x8000 00:07:10.450 Nvme2n3 : 5.88 130.56 8.16 0.00 0.00 849659.27 73400.32 896935.78 00:07:10.450 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x0 length 0x2000 00:07:10.450 Nvme3n1 : 6.04 149.60 9.35 0.00 0.00 697137.00 97.67 1187310.67 00:07:10.450 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:10.450 Verification LBA range: start 0x2000 length 0x2000 00:07:10.450 Nvme3n1 : 5.98 145.48 9.09 0.00 0.00 745958.65 1903.06 909841.33 00:07:10.450 [2024-11-28T08:51:04.570Z] =================================================================================================================== 00:07:10.450 [2024-11-28T08:51:04.570Z] Total : 1443.92 90.24 0.00 0.00 948109.07 97.67 1716438.25 00:07:11.022 00:07:11.022 real 
0m7.495s 00:07:11.022 user 0m14.172s 00:07:11.022 sys 0m0.272s 00:07:11.022 08:51:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.022 08:51:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:11.022 ************************************ 00:07:11.022 END TEST bdev_verify_big_io 00:07:11.022 ************************************ 00:07:11.022 08:51:04 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.022 08:51:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:11.022 08:51:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.022 08:51:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.022 ************************************ 00:07:11.022 START TEST bdev_write_zeroes 00:07:11.022 ************************************ 00:07:11.022 08:51:04 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.022 [2024-11-28 08:51:04.960099] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.022 [2024-11-28 08:51:04.960189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73029 ] 00:07:11.022 [2024-11-28 08:51:05.102461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.022 [2024-11-28 08:51:05.132440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.593 Running I/O for 1 seconds... 
00:07:12.532 58752.00 IOPS, 229.50 MiB/s 00:07:12.532 Latency(us) 00:07:12.532 [2024-11-28T08:51:06.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.532 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme0n1 : 1.02 9771.12 38.17 0.00 0.00 13072.09 9830.40 20467.40 00:07:12.532 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme1n1 : 1.02 9759.76 38.12 0.00 0.00 13075.71 9981.64 20669.05 00:07:12.532 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme2n1 : 1.02 9748.73 38.08 0.00 0.00 13024.29 9275.86 19963.27 00:07:12.532 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme2n2 : 1.03 9737.76 38.04 0.00 0.00 13025.52 9628.75 19459.15 00:07:12.532 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme2n3 : 1.03 9726.42 37.99 0.00 0.00 13020.92 9679.16 18955.03 00:07:12.532 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:12.532 Nvme3n1 : 1.03 9715.51 37.95 0.00 0.00 12996.21 8418.86 20467.40 00:07:12.532 [2024-11-28T08:51:06.652Z] =================================================================================================================== 00:07:12.532 [2024-11-28T08:51:06.652Z] Total : 58459.31 228.36 0.00 0.00 13035.79 8418.86 20669.05 00:07:12.792 00:07:12.792 real 0m1.806s 00:07:12.792 user 0m1.530s 00:07:12.792 sys 0m0.167s 00:07:12.792 08:51:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.792 ************************************ 00:07:12.792 08:51:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:12.792 END TEST bdev_write_zeroes 00:07:12.792 ************************************ 00:07:12.792 08:51:06 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.792 08:51:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:12.792 08:51:06 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.792 08:51:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.792 ************************************ 00:07:12.792 START TEST bdev_json_nonenclosed 00:07:12.792 ************************************ 00:07:12.792 08:51:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.792 [2024-11-28 08:51:06.815974] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:12.792 [2024-11-28 08:51:06.816084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:07:13.054 [2024-11-28 08:51:06.962294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.054 [2024-11-28 08:51:06.995930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.054 [2024-11-28 08:51:06.996016] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:07:13.054 [2024-11-28 08:51:06.996030] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:13.054 [2024-11-28 08:51:06.996041] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.054 00:07:13.054 real 0m0.318s 00:07:13.054 user 0m0.123s 00:07:13.054 sys 0m0.092s 00:07:13.054 ************************************ 00:07:13.054 END TEST bdev_json_nonenclosed 00:07:13.054 ************************************ 00:07:13.054 08:51:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.054 08:51:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:13.054 08:51:07 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:13.054 08:51:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:13.054 08:51:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.054 08:51:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.054 ************************************ 00:07:13.054 START TEST bdev_json_nonarray 00:07:13.054 ************************************ 00:07:13.054 08:51:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:13.314 [2024-11-28 08:51:07.173077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.314 [2024-11-28 08:51:07.173183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73091 ] 00:07:13.314 [2024-11-28 08:51:07.320622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.314 [2024-11-28 08:51:07.354205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.314 [2024-11-28 08:51:07.354294] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:13.314 [2024-11-28 08:51:07.354309] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:13.314 [2024-11-28 08:51:07.354319] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.578 00:07:13.578 real 0m0.321s 00:07:13.578 user 0m0.129s 00:07:13.578 sys 0m0.089s 00:07:13.578 08:51:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.578 ************************************ 00:07:13.578 END TEST bdev_json_nonarray 00:07:13.578 ************************************ 00:07:13.578 08:51:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:13.578 08:51:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:13.578 00:07:13.578 real 0m30.461s 00:07:13.578 user 0m48.195s 00:07:13.578 sys 0m5.290s 00:07:13.578 ************************************ 00:07:13.578 END TEST blockdev_nvme 00:07:13.578 ************************************ 00:07:13.578 08:51:07 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.578 08:51:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.578 08:51:07 -- spdk/autotest.sh@209 -- # uname -s 00:07:13.578 08:51:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:13.578 08:51:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:13.578 08:51:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.578 08:51:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.578 08:51:07 -- common/autotest_common.sh@10 -- # set +x 00:07:13.578 ************************************ 00:07:13.578 START TEST blockdev_nvme_gpt 00:07:13.578 ************************************ 00:07:13.578 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:13.578 * Looking for test storage... 
00:07:13.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:13.578 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.578 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.578 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.578 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:13.578 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.579 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:13.579 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.579 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.579 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.579 08:51:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 
--rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.579 --rc genhtml_branch_coverage=1 00:07:13.579 --rc genhtml_function_coverage=1 00:07:13.579 --rc genhtml_legend=1 00:07:13.579 --rc geninfo_all_blocks=1 00:07:13.579 --rc geninfo_unexecuted_blocks=1 00:07:13.579 00:07:13.579 ' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 
00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73164 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73164 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73164 ']' 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.579 08:51:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.579 08:51:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:13.840 [2024-11-28 08:51:07.741769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.840 [2024-11-28 08:51:07.741905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73164 ] 00:07:13.840 [2024-11-28 08:51:07.885439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.840 [2024-11-28 08:51:07.919022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.782 08:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.782 08:51:08 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:14.782 08:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:14.782 08:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:14.782 08:51:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:14.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:15.042 Waiting for block devices as requested 00:07:15.042 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.042 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.042 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.303 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.590 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:20.590 08:51:14 blockdev_nvme_gpt -- 
bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in 
/sys/block/nvme* 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:20.590 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 
-- # [[ none != none ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:20.591 BYT; 00:07:20.591 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:20.591 BYT; 00:07:20.591 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:20.591 08:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:21.174 08:51:15 
blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:21.174 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:21.175 08:51:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:21.175 08:51:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:21.175 08:51:15 
blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:21.175 08:51:15 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:21.175 08:51:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:21.175 08:51:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:23.076 The operation has completed successfully. 00:07:23.076 08:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:24.010 The operation has completed successfully. 00:07:24.010 08:51:17 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:24.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.526 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.526 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.785 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:24.785 08:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:24.785 08:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.785 08:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.785 [] 00:07:24.785 08:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.785 08:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:24.785 08:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:24.785 08:51:18 blockdev_nvme_gpt -- 
bdev/blockdev.sh@82 -- # mapfile -t json 00:07:24.785 08:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:24.785 08:51:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:24.785 08:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.785 08:51:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:25.076 08:51:19 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:25.076 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.076 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "342111be-4388-467f-a6fd-f590a72ac66f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "342111be-4388-467f-a6fd-f590a72ac66f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' 
"nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' 
"unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "81fb5130-dcc9-4696-913b-662c9840cb16"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81fb5130-dcc9-4696-913b-662c9840cb16",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1b30690b-afa9-4c4f-bd05-ec18eddbb007"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b30690b-afa9-4c4f-bd05-ec18eddbb007",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b0c91cb7-34f5-470d-bae1-ef7c581cfea8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0c91cb7-34f5-470d-bae1-ef7c581cfea8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4517471a-3f5d-4c89-ad74-a2d3b0de1203"' ' ],' ' "product_name": "NVMe disk",' ' 
"block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4517471a-3f5d-4c89-ad74-a2d3b0de1203",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:25.077 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73164 00:07:25.077 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73164 ']' 00:07:25.077 08:51:19 
blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73164 00:07:25.077 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:25.077 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.077 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73164 00:07:25.336 killing process with pid 73164 00:07:25.336 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.336 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.336 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73164' 00:07:25.336 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73164 00:07:25.336 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73164 00:07:25.594 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:25.594 08:51:19 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:25.594 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:25.594 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.594 08:51:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.594 ************************************ 00:07:25.594 START TEST bdev_hello_world 00:07:25.594 ************************************ 00:07:25.594 08:51:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:25.594 [2024-11-28 08:51:19.564102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:25.594 [2024-11-28 08:51:19.564222] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73791 ] 00:07:25.594 [2024-11-28 08:51:19.707889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.854 [2024-11-28 08:51:19.748092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.114 [2024-11-28 08:51:20.130422] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:26.115 [2024-11-28 08:51:20.130475] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:26.115 [2024-11-28 08:51:20.130495] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:26.115 [2024-11-28 08:51:20.132692] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:26.115 [2024-11-28 08:51:20.133217] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:26.115 [2024-11-28 08:51:20.133267] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:26.115 [2024-11-28 08:51:20.133575] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:26.115 00:07:26.115 [2024-11-28 08:51:20.133609] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:26.376 00:07:26.376 real 0m0.818s 00:07:26.376 user 0m0.535s 00:07:26.376 sys 0m0.179s 00:07:26.376 08:51:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.376 ************************************ 00:07:26.376 END TEST bdev_hello_world 00:07:26.376 ************************************ 00:07:26.376 08:51:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:26.376 08:51:20 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:26.376 08:51:20 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.376 08:51:20 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.376 08:51:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.376 ************************************ 00:07:26.376 START TEST bdev_bounds 00:07:26.376 ************************************ 00:07:26.376 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:26.376 08:51:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73822 00:07:26.376 Process bdevio pid: 73822 00:07:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73822' 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73822 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73822 ']' 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.377 08:51:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:26.377 [2024-11-28 08:51:20.438556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:26.377 [2024-11-28 08:51:20.438859] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73822 ] 00:07:26.661 [2024-11-28 08:51:20.582878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.661 [2024-11-28 08:51:20.628005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.661 [2024-11-28 08:51:20.628344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.661 [2024-11-28 08:51:20.628285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.253 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.254 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:27.254 08:51:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:27.515 I/O targets: 00:07:27.515 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:27.515 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:27.515 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:27.515 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.515 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.515 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.515 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:27.515 00:07:27.515 00:07:27.515 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.515 http://cunit.sourceforge.net/ 00:07:27.515 00:07:27.515 00:07:27.515 Suite: bdevio tests on: Nvme3n1 00:07:27.515 Test: blockdev write read block ...passed 00:07:27.515 Test: blockdev write zeroes read block ...passed 00:07:27.515 Test: blockdev write zeroes read no split ...passed 00:07:27.515 Test: blockdev write zeroes read split 
...passed 00:07:27.515 Test: blockdev write zeroes read split partial ...passed 00:07:27.515 Test: blockdev reset ...[2024-11-28 08:51:21.426941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:27.515 passed 00:07:27.515 Test: blockdev write read 8 blocks ...[2024-11-28 08:51:21.428932] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:27.515 passed 00:07:27.515 Test: blockdev write read size > 128k ...passed 00:07:27.515 Test: blockdev write read invalid size ...passed 00:07:27.515 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.515 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.515 Test: blockdev write read max offset ...passed 00:07:27.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.515 Test: blockdev writev readv 8 blocks ...passed 00:07:27.515 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.515 Test: blockdev writev readv block ...passed 00:07:27.515 Test: blockdev writev readv size > 128k ...passed 00:07:27.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.515 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.437228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe0e000 len:0x1000 00:07:27.515 [2024-11-28 08:51:21.437277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.515 passed 00:07:27.515 Test: blockdev nvme passthru rw ...passed 00:07:27.515 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.516 Test: blockdev nvme admin passthru ...[2024-11-28 08:51:21.439268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.516 [2024-11-28 08:51:21.439303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev copy ...passed 00:07:27.516 Suite: bdevio tests on: Nvme2n3 00:07:27.516 Test: blockdev write read block ...passed 00:07:27.516 Test: blockdev write zeroes read block ...passed 00:07:27.516 Test: blockdev write zeroes read no split ...passed 00:07:27.516 Test: blockdev write zeroes read split ...passed 00:07:27.516 Test: blockdev write zeroes read split partial ...passed 00:07:27.516 Test: blockdev reset ...[2024-11-28 08:51:21.469414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:27.516 [2024-11-28 08:51:21.472334] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:27.516 passed 00:07:27.516 Test: blockdev write read 8 blocks ...passed 00:07:27.516 Test: blockdev write read size > 128k ...passed 00:07:27.516 Test: blockdev write read invalid size ...passed 00:07:27.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.516 Test: blockdev write read max offset ...passed 00:07:27.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.516 Test: blockdev writev readv 8 blocks ...passed 00:07:27.516 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.516 Test: blockdev writev readv block ...passed 00:07:27.516 Test: blockdev writev readv size > 128k ...passed 00:07:27.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.516 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.486546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:27.516 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cfe0a000 len:0x1000 00:07:27.516 [2024-11-28 08:51:21.486673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:51:21.488313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.516 [2024-11-28 08:51:21.488345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev nvme admin passthru ...passed 00:07:27.516 Test: blockdev copy ...passed 00:07:27.516 Suite: bdevio tests on: Nvme2n2 00:07:27.516 Test: blockdev write read block ...passed 00:07:27.516 Test: blockdev write zeroes read block ...passed 00:07:27.516 Test: blockdev write zeroes read no split ...passed 00:07:27.516 Test: blockdev write zeroes read split ...passed 00:07:27.516 Test: blockdev write zeroes read split partial ...passed 00:07:27.516 Test: blockdev reset ...[2024-11-28 08:51:21.507226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:27.516 [2024-11-28 08:51:21.510118] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:27.516 passed 00:07:27.516 Test: blockdev write read 8 blocks ...passed 00:07:27.516 Test: blockdev write read size > 128k ...passed 00:07:27.516 Test: blockdev write read invalid size ...passed 00:07:27.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.516 Test: blockdev write read max offset ...passed 00:07:27.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.516 Test: blockdev writev readv 8 blocks ...passed 00:07:27.516 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.516 Test: blockdev writev readv block ...passed 00:07:27.516 Test: blockdev writev readv size > 128k ...passed 00:07:27.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.516 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.523138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2405000 len:0x1000 00:07:27.516 [2024-11-28 08:51:21.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev nvme passthru rw ...passed 00:07:27.516 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:51:21.525000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:07:27.516 Test: blockdev nvme admin passthru ... 
cid:190 PRP1 0x0 PRP2 0x0 00:07:27.516 [2024-11-28 08:51:21.525097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev copy ...passed 00:07:27.516 Suite: bdevio tests on: Nvme2n1 00:07:27.516 Test: blockdev write read block ...passed 00:07:27.516 Test: blockdev write zeroes read block ...passed 00:07:27.516 Test: blockdev write zeroes read no split ...passed 00:07:27.516 Test: blockdev write zeroes read split ...passed 00:07:27.516 Test: blockdev write zeroes read split partial ...passed 00:07:27.516 Test: blockdev reset ...[2024-11-28 08:51:21.546940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:27.516 [2024-11-28 08:51:21.549162] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:27.516 passed 00:07:27.516 Test: blockdev write read 8 blocks ...passed 00:07:27.516 Test: blockdev write read size > 128k ...passed 00:07:27.516 Test: blockdev write read invalid size ...passed 00:07:27.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.516 Test: blockdev write read max offset ...passed 00:07:27.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.516 Test: blockdev writev readv 8 blocks ...passed 00:07:27.516 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.516 Test: blockdev writev readv block ...passed 00:07:27.516 Test: blockdev writev readv size > 128k ...passed 00:07:27.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.516 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.564306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3602000 len:0x1000 00:07:27.516 [2024-11-28 
08:51:21.564469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev nvme passthru rw ...passed 00:07:27.516 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.516 Test: blockdev nvme admin passthru ...[2024-11-28 08:51:21.566113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.516 [2024-11-28 08:51:21.566152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev copy ...passed 00:07:27.516 Suite: bdevio tests on: Nvme1n1p2 00:07:27.516 Test: blockdev write read block ...passed 00:07:27.516 Test: blockdev write zeroes read block ...passed 00:07:27.516 Test: blockdev write zeroes read no split ...passed 00:07:27.516 Test: blockdev write zeroes read split ...passed 00:07:27.516 Test: blockdev write zeroes read split partial ...passed 00:07:27.516 Test: blockdev reset ...[2024-11-28 08:51:21.585472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:27.516 [2024-11-28 08:51:21.588845] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:27.516 passed 00:07:27.516 Test: blockdev write read 8 blocks ...passed 00:07:27.516 Test: blockdev write read size > 128k ...passed 00:07:27.516 Test: blockdev write read invalid size ...passed 00:07:27.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.516 Test: blockdev write read max offset ...passed 00:07:27.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.516 Test: blockdev writev readv 8 blocks ...passed 00:07:27.516 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.516 Test: blockdev writev readv block ...passed 00:07:27.516 Test: blockdev writev readv size > 128k ...passed 00:07:27.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.516 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.604715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d5e3b000 len:0x1000 00:07:27.516 [2024-11-28 08:51:21.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.516 passed 00:07:27.516 Test: blockdev nvme passthru rw ...passed 00:07:27.516 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.516 Test: blockdev nvme admin passthru ...passed 00:07:27.516 Test: blockdev copy ...passed 00:07:27.516 Suite: bdevio tests on: Nvme1n1p1 00:07:27.516 Test: blockdev write read block ...passed 00:07:27.516 Test: blockdev write zeroes read block ...passed 00:07:27.516 Test: blockdev write zeroes read no split ...passed 00:07:27.516 Test: blockdev write zeroes read split ...passed 00:07:27.516 Test: blockdev write zeroes read split partial ...passed 00:07:27.516 Test: blockdev reset ...[2024-11-28 08:51:21.624089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 
00:07:27.516 passed 00:07:27.516 Test: blockdev write read 8 blocks ...[2024-11-28 08:51:21.626694] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:27.516 passed 00:07:27.516 Test: blockdev write read size > 128k ...passed 00:07:27.516 Test: blockdev write read invalid size ...passed 00:07:27.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.516 Test: blockdev write read max offset ...passed 00:07:27.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.778 Test: blockdev writev readv 8 blocks ...passed 00:07:27.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.778 Test: blockdev writev readv block ...passed 00:07:27.778 Test: blockdev writev readv size > 128k ...passed 00:07:27.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.779 Test: blockdev comparev and writev ...[2024-11-28 08:51:21.642989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d5e37000 len:0x1000 00:07:27.779 [2024-11-28 08:51:21.643297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.779 passed 00:07:27.779 Test: blockdev nvme passthru rw ...passed 00:07:27.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.779 Test: blockdev nvme admin passthru ...passed 00:07:27.779 Test: blockdev copy ...passed 00:07:27.779 Suite: bdevio tests on: Nvme0n1 00:07:27.779 Test: blockdev write read block ...passed 00:07:27.779 Test: blockdev write zeroes read block ...passed 00:07:27.779 Test: blockdev write zeroes read no split ...passed 00:07:27.779 Test: blockdev write zeroes read split ...passed 00:07:27.779 Test: blockdev write zeroes read split partial ...passed 00:07:27.779 Test: blockdev 
reset ...[2024-11-28 08:51:21.663193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:27.779 passed 00:07:27.779 Test: blockdev write read 8 blocks ...[2024-11-28 08:51:21.664891] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:27.779 passed 00:07:27.779 Test: blockdev write read size > 128k ...passed 00:07:27.779 Test: blockdev write read invalid size ...passed 00:07:27.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.779 Test: blockdev write read max offset ...passed 00:07:27.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.779 Test: blockdev writev readv 8 blocks ...passed 00:07:27.779 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.779 Test: blockdev writev readv block ...passed 00:07:27.779 Test: blockdev writev readv size > 128k ...passed 00:07:27.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.779 Test: blockdev comparev and writev ...passed 00:07:27.779 Test: blockdev nvme passthru rw ...[2024-11-28 08:51:21.676309] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:27.779 separate metadata which is not supported yet. 
00:07:27.779 passed 00:07:27.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.779 Test: blockdev nvme admin passthru ...[2024-11-28 08:51:21.677430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:27.779 [2024-11-28 08:51:21.677472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:27.779 passed 00:07:27.779 Test: blockdev copy ...passed 00:07:27.779 00:07:27.779 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.779 suites 7 7 n/a 0 0 00:07:27.779 tests 161 161 161 0 0 00:07:27.779 asserts 1025 1025 1025 0 n/a 00:07:27.779 00:07:27.779 Elapsed time = 0.623 seconds 00:07:27.779 0 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73822 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73822 ']' 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73822 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73822 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73822' 00:07:27.779 killing process with pid 73822 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73822 00:07:27.779 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 
73822 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:28.042 00:07:28.042 real 0m1.535s 00:07:28.042 user 0m3.815s 00:07:28.042 sys 0m0.285s 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:28.042 ************************************ 00:07:28.042 END TEST bdev_bounds 00:07:28.042 ************************************ 00:07:28.042 08:51:21 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:28.042 08:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:28.042 08:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.042 08:51:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.042 ************************************ 00:07:28.042 START TEST bdev_nbd 00:07:28.042 ************************************ 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73865 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73865 /var/tmp/spdk-nbd.sock 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@831 -- # '[' -z 73865 ']' 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:28.042 08:51:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:28.042 [2024-11-28 08:51:22.021857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:28.042 [2024-11-28 08:51:22.022115] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.304 [2024-11-28 08:51:22.169072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.304 [2024-11-28 08:51:22.216888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@25 -- # local nbd_device 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:28.873 08:51:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.131 1+0 records in 00:07:29.131 1+0 records out 00:07:29.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450484 s, 9.1 MB/s 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.131 08:51:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.131 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.390 1+0 records in 00:07:29.390 1+0 records out 00:07:29.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555794 s, 7.4 MB/s 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.390 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:29.650 08:51:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.650 1+0 records in 00:07:29.650 1+0 records out 00:07:29.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373254 s, 11.0 MB/s 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.650 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.910 1+0 records in 00:07:29.910 1+0 records out 00:07:29.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793563 s, 5.2 MB/s 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.910 08:51:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:29.910 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd4 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.170 1+0 records in 00:07:30.170 1+0 records out 00:07:30.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487493 s, 8.4 MB/s 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.170 1+0 records in 00:07:30.170 1+0 records out 00:07:30.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387173 s, 10.6 MB/s 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:30.170 08:51:24 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.170 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.430 1+0 records in 
00:07:30.430 1+0 records out 00:07:30.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560084 s, 7.3 MB/s 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.430 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.431 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd0", 00:07:30.691 "bdev_name": "Nvme0n1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd1", 00:07:30.691 "bdev_name": "Nvme1n1p1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd2", 00:07:30.691 "bdev_name": "Nvme1n1p2" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd3", 00:07:30.691 "bdev_name": "Nvme2n1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd4", 00:07:30.691 "bdev_name": "Nvme2n2" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd5", 00:07:30.691 "bdev_name": "Nvme2n3" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd6", 00:07:30.691 "bdev_name": "Nvme3n1" 00:07:30.691 } 00:07:30.691 ]' 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # 
nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd0", 00:07:30.691 "bdev_name": "Nvme0n1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd1", 00:07:30.691 "bdev_name": "Nvme1n1p1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd2", 00:07:30.691 "bdev_name": "Nvme1n1p2" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd3", 00:07:30.691 "bdev_name": "Nvme2n1" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd4", 00:07:30.691 "bdev_name": "Nvme2n2" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd5", 00:07:30.691 "bdev_name": "Nvme2n3" 00:07:30.691 }, 00:07:30.691 { 00:07:30.691 "nbd_device": "/dev/nbd6", 00:07:30.691 "bdev_name": "Nvme3n1" 00:07:30.691 } 00:07:30.691 ]' 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.691 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.950 08:51:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.950 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.951 08:51:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.210 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.472 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.734 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.997 08:51:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:31.997 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.997 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
00:07:31.997 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.997 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:32.258 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.259 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 
'Nvme2n3' 'Nvme3n1') 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.519 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:32.520 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:32.780 /dev/nbd0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.780 1+0 records in 00:07:32.780 1+0 records out 00:07:32.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317507 s, 12.9 MB/s 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:32.780 /dev/nbd1 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 
00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.780 1+0 records in 00:07:32.780 1+0 records out 00:07:32.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358381 s, 11.4 MB/s 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:32.780 08:51:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:33.040 /dev/nbd10 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:33.040 
08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.040 1+0 records in 00:07:33.040 1+0 records out 00:07:33.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595805 s, 6.9 MB/s 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.040 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:33.301 /dev/nbd11 00:07:33.301 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:33.301 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:33.301 
08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:33.301 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.302 1+0 records in 00:07:33.302 1+0 records out 00:07:33.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468517 s, 8.7 MB/s 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.302 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:33.563 /dev/nbd12 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.563 1+0 records in 00:07:33.563 1+0 records out 00:07:33.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510312 s, 8.0 MB/s 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.563 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:33.823 /dev/nbd13 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.823 1+0 records in 00:07:33.823 1+0 records out 00:07:33.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443731 s, 9.2 MB/s 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.823 08:51:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:34.084 /dev/nbd14 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.084 1+0 records in 00:07:34.084 1+0 records out 00:07:34.084 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000412302 s, 9.9 MB/s 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.084 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.344 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd0", 00:07:34.344 "bdev_name": "Nvme0n1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd1", 00:07:34.344 "bdev_name": "Nvme1n1p1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd10", 00:07:34.344 "bdev_name": "Nvme1n1p2" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd11", 00:07:34.344 "bdev_name": "Nvme2n1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd12", 00:07:34.344 "bdev_name": "Nvme2n2" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd13", 00:07:34.344 "bdev_name": "Nvme2n3" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 
"nbd_device": "/dev/nbd14", 00:07:34.344 "bdev_name": "Nvme3n1" 00:07:34.344 } 00:07:34.344 ]' 00:07:34.344 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd0", 00:07:34.344 "bdev_name": "Nvme0n1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd1", 00:07:34.344 "bdev_name": "Nvme1n1p1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd10", 00:07:34.344 "bdev_name": "Nvme1n1p2" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd11", 00:07:34.344 "bdev_name": "Nvme2n1" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd12", 00:07:34.344 "bdev_name": "Nvme2n2" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd13", 00:07:34.344 "bdev_name": "Nvme2n3" 00:07:34.344 }, 00:07:34.344 { 00:07:34.344 "nbd_device": "/dev/nbd14", 00:07:34.345 "bdev_name": "Nvme3n1" 00:07:34.345 } 00:07:34.345 ]' 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.345 /dev/nbd1 00:07:34.345 /dev/nbd10 00:07:34.345 /dev/nbd11 00:07:34.345 /dev/nbd12 00:07:34.345 /dev/nbd13 00:07:34.345 /dev/nbd14' 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.345 /dev/nbd1 00:07:34.345 /dev/nbd10 00:07:34.345 /dev/nbd11 00:07:34.345 /dev/nbd12 00:07:34.345 /dev/nbd13 00:07:34.345 /dev/nbd14' 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 
00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:34.345 256+0 records in 00:07:34.345 256+0 records out 00:07:34.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830613 s, 126 MB/s 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.345 256+0 records in 00:07:34.345 256+0 records out 00:07:34.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077351 s, 13.6 MB/s 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.345 256+0 records in 00:07:34.345 256+0 records out 00:07:34.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0857524 s, 12.2 MB/s 00:07:34.345 08:51:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.345 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:34.605 256+0 records in 00:07:34.605 256+0 records out 00:07:34.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0792731 s, 13.2 MB/s 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:34.605 256+0 records in 00:07:34.605 256+0 records out 00:07:34.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0827446 s, 12.7 MB/s 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:34.605 256+0 records in 00:07:34.605 256+0 records out 00:07:34.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0749988 s, 14.0 MB/s 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.605 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:34.864 256+0 records in 00:07:34.864 256+0 records out 00:07:34.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0783501 s, 13.4 MB/s 00:07:34.864 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.864 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:34.864 256+0 records in 00:07:34.864 
256+0 records out 00:07:34.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.077939 s, 13.5 MB/s 00:07:34.864 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:34.864 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.865 08:51:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.124 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:07:35.124 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.124 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.125 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.384 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.645 08:51:29 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.906 08:51:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.166 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.427 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.688 08:51:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:36.688 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:36.689 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:36.689 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.689 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:36.689 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:36.950 malloc_lvol_verify 00:07:36.950 08:51:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:36.950 63ab16af-4f2e-49ed-85c2-56ba2be23666 00:07:36.950 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:37.211 820bad23-85d6-45c0-bd37-e3ccf4094794 00:07:37.211 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:37.473 /dev/nbd0 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:37.473 
08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:37.473 mke2fs 1.47.0 (5-Feb-2023) 00:07:37.473 Discarding device blocks: 0/4096 done 00:07:37.473 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:37.473 00:07:37.473 Allocating group tables: 0/1 done 00:07:37.473 Writing inode tables: 0/1 done 00:07:37.473 Creating journal (1024 blocks): done 00:07:37.473 Writing superblocks and filesystem accounting information: 0/1 done 00:07:37.473 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.473 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73865 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73865 ']' 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73865 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73865 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73865' 00:07:37.733 killing process with pid 73865 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73865 00:07:37.733 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73865 00:07:37.993 08:51:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:37.993 00:07:37.993 real 0m9.979s 00:07:37.993 user 0m14.533s 00:07:37.993 sys 0m3.348s 00:07:37.993 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.993 08:51:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:37.993 ************************************ 00:07:37.993 END TEST bdev_nbd 00:07:37.993 
************************************ 00:07:37.993 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:37.993 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:37.993 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:37.993 skipping fio tests on NVMe due to multi-ns failures. 00:07:37.993 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:37.993 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:37.994 08:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:37.994 08:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:37.994 08:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.994 08:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.994 ************************************ 00:07:37.994 START TEST bdev_verify 00:07:37.994 ************************************ 00:07:37.994 08:51:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:37.994 [2024-11-28 08:51:32.040402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:37.994 [2024-11-28 08:51:32.040516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74270 ] 00:07:38.254 [2024-11-28 08:51:32.186298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.254 [2024-11-28 08:51:32.228786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.254 [2024-11-28 08:51:32.228819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.825 Running I/O for 5 seconds... 00:07:40.778 21760.00 IOPS, 85.00 MiB/s [2024-11-28T08:51:36.314Z] 22688.00 IOPS, 88.62 MiB/s [2024-11-28T08:51:37.248Z] 23594.67 IOPS, 92.17 MiB/s [2024-11-28T08:51:37.814Z] 23872.00 IOPS, 93.25 MiB/s [2024-11-28T08:51:37.814Z] 23385.60 IOPS, 91.35 MiB/s 00:07:43.694 Latency(us) 00:07:43.694 [2024-11-28T08:51:37.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.694 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0xbd0bd 00:07:43.695 Nvme0n1 : 5.05 1648.42 6.44 0.00 0.00 77287.73 16636.06 88725.66 00:07:43.695 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:43.695 Nvme0n1 : 5.07 1639.98 6.41 0.00 0.00 77814.61 15325.34 89935.56 00:07:43.695 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x4ff80 00:07:43.695 Nvme1n1p1 : 5.07 1653.94 6.46 0.00 0.00 76918.35 7864.32 76626.71 00:07:43.695 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:43.695 Nvme1n1p1 : 5.08 1638.88 6.40 0.00 0.00 77664.72 17341.83 77433.30 00:07:43.695 
Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x4ff7f 00:07:43.695 Nvme1n1p2 : 5.07 1653.51 6.46 0.00 0.00 76753.66 7713.08 70577.23 00:07:43.695 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:43.695 Nvme1n1p2 : 5.08 1638.41 6.40 0.00 0.00 77549.87 18450.90 70980.53 00:07:43.695 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x80000 00:07:43.695 Nvme2n1 : 5.07 1653.15 6.46 0.00 0.00 76670.49 7763.50 68560.74 00:07:43.695 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x80000 length 0x80000 00:07:43.695 Nvme2n1 : 5.08 1637.92 6.40 0.00 0.00 77384.76 17745.13 67350.84 00:07:43.695 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x80000 00:07:43.695 Nvme2n2 : 5.08 1663.19 6.50 0.00 0.00 76205.57 5545.35 70980.53 00:07:43.695 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x80000 length 0x80000 00:07:43.695 Nvme2n2 : 5.08 1636.76 6.39 0.00 0.00 77227.94 17140.18 68964.04 00:07:43.695 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x80000 00:07:43.695 Nvme2n3 : 5.08 1662.30 6.49 0.00 0.00 76088.34 7662.67 74206.92 00:07:43.695 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x80000 length 0x80000 00:07:43.695 Nvme2n3 : 5.09 1635.64 6.39 0.00 0.00 77070.74 14216.27 72997.02 00:07:43.695 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x0 length 0x20000 00:07:43.695 Nvme3n1 : 5.08 
1661.77 6.49 0.00 0.00 75963.64 8469.27 77030.01 00:07:43.695 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:43.695 Verification LBA range: start 0x20000 length 0x20000 00:07:43.695 Nvme3n1 : 5.10 1654.95 6.46 0.00 0.00 76163.49 6452.78 76626.71 00:07:43.695 [2024-11-28T08:51:37.815Z] =================================================================================================================== 00:07:43.695 [2024-11-28T08:51:37.815Z] Total : 23078.79 90.15 0.00 0.00 76907.97 5545.35 89935.56 00:07:46.978 00:07:46.978 real 0m8.889s 00:07:46.978 user 0m16.975s 00:07:46.978 sys 0m0.269s 00:07:46.978 08:51:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.978 08:51:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 ************************************ 00:07:46.978 END TEST bdev_verify 00:07:46.978 ************************************ 00:07:46.978 08:51:40 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.978 08:51:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:46.978 08:51:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.978 08:51:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.978 ************************************ 00:07:46.978 START TEST bdev_verify_big_io 00:07:46.978 ************************************ 00:07:46.978 08:51:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.978 [2024-11-28 08:51:40.964855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 
initialization... 00:07:46.978 [2024-11-28 08:51:40.964965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74363 ] 00:07:47.235 [2024-11-28 08:51:41.111644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.235 [2024-11-28 08:51:41.152967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.235 [2024-11-28 08:51:41.153032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.493 Running I/O for 5 seconds... 00:07:53.594 1995.00 IOPS, 124.69 MiB/s [2024-11-28T08:51:48.279Z] 3923.00 IOPS, 245.19 MiB/s 00:07:54.159 Latency(us) 00:07:54.159 [2024-11-28T08:51:48.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.159 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x0 length 0xbd0b 00:07:54.159 Nvme0n1 : 5.62 136.72 8.54 0.00 0.00 908342.48 13812.97 909841.33 00:07:54.159 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:54.159 Nvme0n1 : 5.96 59.08 3.69 0.00 0.00 2062800.88 14922.04 2645637.91 00:07:54.159 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x0 length 0x4ff8 00:07:54.159 Nvme1n1p1 : 5.70 138.50 8.66 0.00 0.00 875768.23 77030.01 774333.05 00:07:54.159 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:54.159 Nvme1n1p1 : 5.89 87.58 5.47 0.00 0.00 1345755.77 64527.75 1393799.48 00:07:54.159 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x0 length 0x4ff7 
00:07:54.159 Nvme1n1p2 : 5.70 131.07 8.19 0.00 0.00 906569.86 77030.01 1568024.42 00:07:54.159 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:54.159 Nvme1n1p2 : 5.89 89.76 5.61 0.00 0.00 1257888.85 35691.91 1335724.50 00:07:54.159 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x0 length 0x8000 00:07:54.159 Nvme2n1 : 5.83 143.23 8.95 0.00 0.00 812050.87 88725.66 1142141.24 00:07:54.159 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.159 Verification LBA range: start 0x8000 length 0x8000 00:07:54.160 Nvme2n1 : 5.92 96.61 6.04 0.00 0.00 1124514.21 25306.98 1400252.26 00:07:54.160 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.160 Verification LBA range: start 0x0 length 0x8000 00:07:54.160 Nvme2n2 : 5.79 142.91 8.93 0.00 0.00 793873.54 78239.90 935652.43 00:07:54.160 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.160 Verification LBA range: start 0x8000 length 0x8000 00:07:54.160 Nvme2n2 : 5.98 111.37 6.96 0.00 0.00 949050.67 19761.62 1406705.03 00:07:54.160 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.160 Verification LBA range: start 0x0 length 0x8000 00:07:54.160 Nvme2n3 : 5.85 153.96 9.62 0.00 0.00 728008.69 2545.82 942105.21 00:07:54.160 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.160 Verification LBA range: start 0x8000 length 0x8000 00:07:54.160 Nvme2n3 : 6.18 175.54 10.97 0.00 0.00 580689.79 7965.14 1413157.81 00:07:54.160 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.160 Verification LBA range: start 0x0 length 0x2000 00:07:54.160 Nvme3n1 : 5.85 158.36 9.90 0.00 0.00 691458.36 3049.94 961463.53 00:07:54.160 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:07:54.160 Verification LBA range: start 0x2000 length 0x2000 00:07:54.160 Nvme3n1 : 6.37 301.62 18.85 0.00 0.00 326675.93 248.91 1432516.14 00:07:54.160 [2024-11-28T08:51:48.280Z] =================================================================================================================== 00:07:54.160 [2024-11-28T08:51:48.280Z] Total : 1926.30 120.39 0.00 0.00 814697.54 248.91 2645637.91 00:07:55.093 00:07:55.093 real 0m8.094s 00:07:55.093 user 0m15.425s 00:07:55.093 sys 0m0.230s 00:07:55.093 08:51:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.093 08:51:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:55.093 ************************************ 00:07:55.094 END TEST bdev_verify_big_io 00:07:55.094 ************************************ 00:07:55.094 08:51:49 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.094 08:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:55.094 08:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.094 08:51:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.094 ************************************ 00:07:55.094 START TEST bdev_write_zeroes 00:07:55.094 ************************************ 00:07:55.094 08:51:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.094 [2024-11-28 08:51:49.095327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:55.094 [2024-11-28 08:51:49.095458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74472 ] 00:07:55.354 [2024-11-28 08:51:49.240676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.354 [2024-11-28 08:51:49.284451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.612 Running I/O for 1 seconds... 00:07:56.992 65856.00 IOPS, 257.25 MiB/s 00:07:56.992 Latency(us) 00:07:56.992 [2024-11-28T08:51:51.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.992 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme0n1 : 1.03 9352.33 36.53 0.00 0.00 13656.80 11544.42 24601.21 00:07:56.992 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme1n1p1 : 1.03 9340.86 36.49 0.00 0.00 13651.49 11292.36 24298.73 00:07:56.992 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme1n1p2 : 1.03 9329.38 36.44 0.00 0.00 13624.72 11393.18 23391.31 00:07:56.992 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme2n1 : 1.03 9318.90 36.40 0.00 0.00 13585.83 8670.92 23189.66 00:07:56.992 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme2n2 : 1.03 9308.34 36.36 0.00 0.00 13581.16 8166.79 23290.49 00:07:56.992 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme2n3 : 1.03 9297.93 36.32 0.00 0.00 13573.84 7763.50 23189.66 00:07:56.992 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.992 Nvme3n1 : 1.03 9287.46 36.28 0.00 0.00 13570.16 7259.37 24802.86 00:07:56.992 [2024-11-28T08:51:51.112Z] 
=================================================================================================================== 00:07:56.992 [2024-11-28T08:51:51.112Z] Total : 65235.20 254.82 0.00 0.00 13606.29 7259.37 24802.86 00:07:56.992 00:07:56.992 real 0m1.917s 00:07:56.992 user 0m1.614s 00:07:56.992 sys 0m0.191s 00:07:56.992 08:51:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.992 08:51:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:56.992 ************************************ 00:07:56.992 END TEST bdev_write_zeroes 00:07:56.992 ************************************ 00:07:56.992 08:51:50 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.992 08:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:56.992 08:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.992 08:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.992 ************************************ 00:07:56.992 START TEST bdev_json_nonenclosed 00:07:56.992 ************************************ 00:07:56.992 08:51:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.992 [2024-11-28 08:51:51.054345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:56.992 [2024-11-28 08:51:51.054462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:07:57.253 [2024-11-28 08:51:51.204388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.253 [2024-11-28 08:51:51.247725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.253 [2024-11-28 08:51:51.247837] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:57.253 [2024-11-28 08:51:51.247857] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:57.254 [2024-11-28 08:51:51.247872] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.254 00:07:57.254 real 0m0.349s 00:07:57.254 user 0m0.136s 00:07:57.254 sys 0m0.109s 00:07:57.254 08:51:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.254 08:51:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:57.254 ************************************ 00:07:57.254 END TEST bdev_json_nonenclosed 00:07:57.254 ************************************ 00:07:57.515 08:51:51 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.515 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:57.515 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.515 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.515 ************************************ 00:07:57.515 START TEST bdev_json_nonarray 00:07:57.515 
************************************ 00:07:57.515 08:51:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.515 [2024-11-28 08:51:51.446609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:57.515 [2024-11-28 08:51:51.446726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74534 ] 00:07:57.515 [2024-11-28 08:51:51.596731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.776 [2024-11-28 08:51:51.638384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.777 [2024-11-28 08:51:51.638491] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:57.777 [2024-11-28 08:51:51.638513] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:57.777 [2024-11-28 08:51:51.638525] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.777 00:07:57.777 real 0m0.350s 00:07:57.777 user 0m0.150s 00:07:57.777 sys 0m0.097s 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:57.777 ************************************ 00:07:57.777 END TEST bdev_json_nonarray 00:07:57.777 ************************************ 00:07:57.777 08:51:51 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:57.777 08:51:51 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:57.777 08:51:51 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:57.777 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.777 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.777 08:51:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.777 ************************************ 00:07:57.777 START TEST bdev_gpt_uuid 00:07:57.777 ************************************ 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74554 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 
'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74554 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74554 ']' 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.777 08:51:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:57.777 [2024-11-28 08:51:51.842923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.777 [2024-11-28 08:51:51.843045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74554 ] 00:07:58.038 [2024-11-28 08:51:51.989499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.038 [2024-11-28 08:51:52.033581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.611 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.611 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:58.611 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.611 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.611 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.184 Some configs were skipped because the RPC state that can call them passed over. 
00:07:59.184 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.184 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:59.184 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.184 08:51:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:59.184 { 00:07:59.184 "name": "Nvme1n1p1", 00:07:59.184 "aliases": [ 00:07:59.184 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:59.184 ], 00:07:59.184 "product_name": "GPT Disk", 00:07:59.184 "block_size": 4096, 00:07:59.184 "num_blocks": 655104, 00:07:59.184 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:59.184 "assigned_rate_limits": { 00:07:59.184 "rw_ios_per_sec": 0, 00:07:59.184 "rw_mbytes_per_sec": 0, 00:07:59.184 "r_mbytes_per_sec": 0, 00:07:59.184 "w_mbytes_per_sec": 0 00:07:59.184 }, 00:07:59.184 "claimed": false, 00:07:59.184 "zoned": false, 00:07:59.184 "supported_io_types": { 00:07:59.184 "read": true, 00:07:59.184 "write": true, 00:07:59.184 "unmap": true, 00:07:59.184 "flush": true, 00:07:59.184 "reset": true, 00:07:59.184 "nvme_admin": false, 00:07:59.184 "nvme_io": false, 00:07:59.184 "nvme_io_md": false, 00:07:59.184 "write_zeroes": true, 00:07:59.184 "zcopy": false, 00:07:59.184 
"get_zone_info": false, 00:07:59.184 "zone_management": false, 00:07:59.184 "zone_append": false, 00:07:59.184 "compare": true, 00:07:59.184 "compare_and_write": false, 00:07:59.184 "abort": true, 00:07:59.184 "seek_hole": false, 00:07:59.184 "seek_data": false, 00:07:59.184 "copy": true, 00:07:59.184 "nvme_iov_md": false 00:07:59.184 }, 00:07:59.184 "driver_specific": { 00:07:59.184 "gpt": { 00:07:59.184 "base_bdev": "Nvme1n1", 00:07:59.184 "offset_blocks": 256, 00:07:59.184 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:59.184 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:59.184 "partition_name": "SPDK_TEST_first" 00:07:59.184 } 00:07:59.184 } 00:07:59.184 } 00:07:59.184 ]' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:59.184 { 00:07:59.184 "name": "Nvme1n1p2", 00:07:59.184 "aliases": [ 00:07:59.184 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:59.184 ], 00:07:59.184 "product_name": "GPT Disk", 00:07:59.184 "block_size": 4096, 00:07:59.184 "num_blocks": 655103, 00:07:59.184 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:59.184 "assigned_rate_limits": { 00:07:59.184 "rw_ios_per_sec": 0, 00:07:59.184 "rw_mbytes_per_sec": 0, 00:07:59.184 "r_mbytes_per_sec": 0, 00:07:59.184 "w_mbytes_per_sec": 0 00:07:59.184 }, 00:07:59.184 "claimed": false, 00:07:59.184 "zoned": false, 00:07:59.184 "supported_io_types": { 00:07:59.184 "read": true, 00:07:59.184 "write": true, 00:07:59.184 "unmap": true, 00:07:59.184 "flush": true, 00:07:59.184 "reset": true, 00:07:59.184 "nvme_admin": false, 00:07:59.184 "nvme_io": false, 00:07:59.184 "nvme_io_md": false, 00:07:59.184 "write_zeroes": true, 00:07:59.184 "zcopy": false, 00:07:59.184 "get_zone_info": false, 00:07:59.184 "zone_management": false, 00:07:59.184 "zone_append": false, 00:07:59.184 "compare": true, 00:07:59.184 "compare_and_write": false, 00:07:59.184 "abort": true, 00:07:59.184 "seek_hole": false, 00:07:59.184 "seek_data": false, 00:07:59.184 "copy": true, 00:07:59.184 "nvme_iov_md": false 00:07:59.184 }, 00:07:59.184 "driver_specific": { 00:07:59.184 "gpt": { 00:07:59.184 "base_bdev": "Nvme1n1", 00:07:59.184 "offset_blocks": 655360, 00:07:59.184 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:59.184 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:59.184 "partition_name": "SPDK_TEST_second" 00:07:59.184 } 00:07:59.184 } 00:07:59.184 } 00:07:59.184 ]' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 
00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74554 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74554 ']' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74554 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74554 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.184 killing process with pid 74554 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74554' 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74554 00:07:59.184 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74554 00:07:59.756 00:07:59.756 real 0m1.818s 00:07:59.756 user 0m1.926s 00:07:59.756 sys 
0m0.389s 00:07:59.756 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.756 08:51:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.756 ************************************ 00:07:59.756 END TEST bdev_gpt_uuid 00:07:59.756 ************************************ 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:59.756 08:51:53 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:00.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:00.017 Waiting for block devices as requested 00:08:00.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:00.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:00.277 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:00.277 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:05.571 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:05.571 08:51:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:05.571 08:51:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:05.571 /dev/nvme0n1: 8 bytes were erased 
at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:05.571 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:05.571 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:05.571 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:05.571 08:51:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:05.571 00:08:05.571 real 0m52.106s 00:08:05.571 user 1m8.085s 00:08:05.571 sys 0m7.441s 00:08:05.571 08:51:59 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.571 08:51:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:05.571 ************************************ 00:08:05.571 END TEST blockdev_nvme_gpt 00:08:05.571 ************************************ 00:08:05.571 08:51:59 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:05.571 08:51:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.571 08:51:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.571 08:51:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.571 ************************************ 00:08:05.571 START TEST nvme 00:08:05.571 ************************************ 00:08:05.571 08:51:59 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:05.831 * Looking for test storage... 
00:08:05.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.831 08:51:59 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.831 08:51:59 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.831 08:51:59 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.831 08:51:59 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.831 08:51:59 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.831 08:51:59 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:05.831 08:51:59 nvme -- scripts/common.sh@345 -- # : 1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.831 08:51:59 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.831 08:51:59 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@353 -- # local d=1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.831 08:51:59 nvme -- scripts/common.sh@355 -- # echo 1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.831 08:51:59 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@353 -- # local d=2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.831 08:51:59 nvme -- scripts/common.sh@355 -- # echo 2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.831 08:51:59 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.831 08:51:59 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.831 08:51:59 nvme -- scripts/common.sh@368 -- # return 0 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.831 --rc genhtml_branch_coverage=1 00:08:05.831 --rc genhtml_function_coverage=1 00:08:05.831 --rc genhtml_legend=1 00:08:05.831 --rc geninfo_all_blocks=1 00:08:05.831 --rc geninfo_unexecuted_blocks=1 00:08:05.831 00:08:05.831 ' 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.831 --rc genhtml_branch_coverage=1 00:08:05.831 --rc genhtml_function_coverage=1 00:08:05.831 --rc genhtml_legend=1 00:08:05.831 --rc geninfo_all_blocks=1 00:08:05.831 --rc geninfo_unexecuted_blocks=1 00:08:05.831 00:08:05.831 ' 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:05.831 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:05.831 --rc genhtml_branch_coverage=1 00:08:05.831 --rc genhtml_function_coverage=1 00:08:05.831 --rc genhtml_legend=1 00:08:05.831 --rc geninfo_all_blocks=1 00:08:05.831 --rc geninfo_unexecuted_blocks=1 00:08:05.831 00:08:05.831 ' 00:08:05.831 08:51:59 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:05.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.831 --rc genhtml_branch_coverage=1 00:08:05.831 --rc genhtml_function_coverage=1 00:08:05.831 --rc genhtml_legend=1 00:08:05.831 --rc geninfo_all_blocks=1 00:08:05.831 --rc geninfo_unexecuted_blocks=1 00:08:05.831 00:08:05.831 ' 00:08:05.831 08:51:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:06.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:06.664 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:06.664 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:06.664 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:06.664 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:06.925 08:52:00 nvme -- nvme/nvme.sh@79 -- # uname 00:08:06.925 08:52:00 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:06.925 08:52:00 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:06.925 08:52:00 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1071 -- # stubpid=75179 00:08:06.925 Waiting for stub to ready for secondary processes... 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75179 ]] 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:06.925 08:52:00 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:06.925 [2024-11-28 08:52:00.818167] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:06.925 [2024-11-28 08:52:00.818284] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:07.495 [2024-11-28 08:52:01.524919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.495 [2024-11-28 08:52:01.544029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.495 [2024-11-28 08:52:01.544135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.495 [2024-11-28 08:52:01.544215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.495 [2024-11-28 08:52:01.553899] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:07.495 [2024-11-28 08:52:01.553931] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:07.495 [2024-11-28 08:52:01.566094] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:07.495 [2024-11-28 08:52:01.566444] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:07.495 [2024-11-28 08:52:01.567501] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:07.495 [2024-11-28 08:52:01.567776] nvme_cuse.c: 928:cuse_session_create: 
*NOTICE*: fuse session for device spdk/nvme1 created 00:08:07.495 [2024-11-28 08:52:01.567913] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:07.495 [2024-11-28 08:52:01.569080] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:07.495 [2024-11-28 08:52:01.569254] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:07.495 [2024-11-28 08:52:01.569319] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:07.495 [2024-11-28 08:52:01.570415] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:07.495 [2024-11-28 08:52:01.571014] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:07.495 [2024-11-28 08:52:01.571084] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:07.495 [2024-11-28 08:52:01.571191] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:07.495 [2024-11-28 08:52:01.571259] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:07.756 done. 00:08:07.756 08:52:01 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:07.756 08:52:01 nvme -- common/autotest_common.sh@1078 -- # echo done. 
00:08:07.756 08:52:01 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:07.756 08:52:01 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:07.756 08:52:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.756 08:52:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.756 ************************************ 00:08:07.756 START TEST nvme_reset 00:08:07.756 ************************************ 00:08:07.756 08:52:01 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:08.016 Initializing NVMe Controllers 00:08:08.016 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:08.016 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:08.016 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:08.016 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:08.016 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:08.016 00:08:08.016 real 0m0.196s 00:08:08.016 user 0m0.054s 00:08:08.016 sys 0m0.091s 00:08:08.016 08:52:01 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.016 08:52:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 END TEST nvme_reset 00:08:08.016 ************************************ 00:08:08.016 08:52:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:08.016 08:52:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.016 08:52:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.016 08:52:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 START TEST nvme_identify 00:08:08.016 ************************************ 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:08.016 
08:52:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:08.016 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:08.016 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:08.016 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:08.016 08:52:02 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:08.016 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:08.281 [2024-11-28 08:52:02.260696] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75201 terminated unexpected 00:08:08.281 ===================================================== 00:08:08.281 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:08.281 ===================================================== 00:08:08.281 Controller Capabilities/Features 00:08:08.281 ================================ 00:08:08.281 Vendor ID: 1b36 00:08:08.281 Subsystem Vendor ID: 1af4 00:08:08.281 Serial Number: 12340 00:08:08.281 Model Number: QEMU NVMe Ctrl 00:08:08.281 Firmware Version: 8.0.0 00:08:08.281 Recommended Arb Burst: 6 00:08:08.281 IEEE OUI Identifier: 00 54 52 00:08:08.281 Multi-path I/O 00:08:08.281 May have 
multiple subsystem ports: No 00:08:08.281 May have multiple controllers: No 00:08:08.281 Associated with SR-IOV VF: No 00:08:08.281 Max Data Transfer Size: 524288 00:08:08.281 Max Number of Namespaces: 256 00:08:08.281 Max Number of I/O Queues: 64 00:08:08.281 NVMe Specification Version (VS): 1.4 00:08:08.281 NVMe Specification Version (Identify): 1.4 00:08:08.281 Maximum Queue Entries: 2048 00:08:08.281 Contiguous Queues Required: Yes 00:08:08.281 Arbitration Mechanisms Supported 00:08:08.281 Weighted Round Robin: Not Supported 00:08:08.281 Vendor Specific: Not Supported 00:08:08.281 Reset Timeout: 7500 ms 00:08:08.281 Doorbell Stride: 4 bytes 00:08:08.281 NVM Subsystem Reset: Not Supported 00:08:08.281 Command Sets Supported 00:08:08.281 NVM Command Set: Supported 00:08:08.281 Boot Partition: Not Supported 00:08:08.281 Memory Page Size Minimum: 4096 bytes 00:08:08.281 Memory Page Size Maximum: 65536 bytes 00:08:08.281 Persistent Memory Region: Not Supported 00:08:08.281 Optional Asynchronous Events Supported 00:08:08.281 Namespace Attribute Notices: Supported 00:08:08.281 Firmware Activation Notices: Not Supported 00:08:08.281 ANA Change Notices: Not Supported 00:08:08.281 PLE Aggregate Log Change Notices: Not Supported 00:08:08.281 LBA Status Info Alert Notices: Not Supported 00:08:08.281 EGE Aggregate Log Change Notices: Not Supported 00:08:08.281 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.281 Zone Descriptor Change Notices: Not Supported 00:08:08.281 Discovery Log Change Notices: Not Supported 00:08:08.281 Controller Attributes 00:08:08.281 128-bit Host Identifier: Not Supported 00:08:08.281 Non-Operational Permissive Mode: Not Supported 00:08:08.281 NVM Sets: Not Supported 00:08:08.281 Read Recovery Levels: Not Supported 00:08:08.281 Endurance Groups: Not Supported 00:08:08.281 Predictable Latency Mode: Not Supported 00:08:08.281 Traffic Based Keep ALive: Not Supported 00:08:08.281 Namespace Granularity: Not Supported 00:08:08.281 SQ 
Associations: Not Supported 00:08:08.281 UUID List: Not Supported 00:08:08.281 Multi-Domain Subsystem: Not Supported 00:08:08.281 Fixed Capacity Management: Not Supported 00:08:08.281 Variable Capacity Management: Not Supported 00:08:08.281 Delete Endurance Group: Not Supported 00:08:08.281 Delete NVM Set: Not Supported 00:08:08.281 Extended LBA Formats Supported: Supported 00:08:08.281 Flexible Data Placement Supported: Not Supported 00:08:08.281 00:08:08.281 Controller Memory Buffer Support 00:08:08.281 ================================ 00:08:08.281 Supported: No 00:08:08.281 00:08:08.281 Persistent Memory Region Support 00:08:08.281 ================================ 00:08:08.281 Supported: No 00:08:08.281 00:08:08.281 Admin Command Set Attributes 00:08:08.281 ============================ 00:08:08.282 Security Send/Receive: Not Supported 00:08:08.282 Format NVM: Supported 00:08:08.282 Firmware Activate/Download: Not Supported 00:08:08.282 Namespace Management: Supported 00:08:08.282 Device Self-Test: Not Supported 00:08:08.282 Directives: Supported 00:08:08.282 NVMe-MI: Not Supported 00:08:08.282 Virtualization Management: Not Supported 00:08:08.282 Doorbell Buffer Config: Supported 00:08:08.282 Get LBA Status Capability: Not Supported 00:08:08.282 Command & Feature Lockdown Capability: Not Supported 00:08:08.282 Abort Command Limit: 4 00:08:08.282 Async Event Request Limit: 4 00:08:08.282 Number of Firmware Slots: N/A 00:08:08.282 Firmware Slot 1 Read-Only: N/A 00:08:08.282 Firmware Activation Without Reset: N/A 00:08:08.282 Multiple Update Detection Support: N/A 00:08:08.282 Firmware Update Granularity: No Information Provided 00:08:08.282 Per-Namespace SMART Log: Yes 00:08:08.282 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.282 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:08.282 Command Effects Log Page: Supported 00:08:08.282 Get Log Page Extended Data: Supported 00:08:08.282 Telemetry Log Pages: Not Supported 00:08:08.282 Persistent Event 
Log Pages: Not Supported 00:08:08.282 Supported Log Pages Log Page: May Support 00:08:08.282 Commands Supported & Effects Log Page: Not Supported 00:08:08.282 Feature Identifiers & Effects Log Page:May Support 00:08:08.282 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.282 Data Area 4 for Telemetry Log: Not Supported 00:08:08.282 Error Log Page Entries Supported: 1 00:08:08.282 Keep Alive: Not Supported 00:08:08.282 00:08:08.282 NVM Command Set Attributes 00:08:08.282 ========================== 00:08:08.282 Submission Queue Entry Size 00:08:08.282 Max: 64 00:08:08.282 Min: 64 00:08:08.282 Completion Queue Entry Size 00:08:08.282 Max: 16 00:08:08.282 Min: 16 00:08:08.282 Number of Namespaces: 256 00:08:08.282 Compare Command: Supported 00:08:08.282 Write Uncorrectable Command: Not Supported 00:08:08.282 Dataset Management Command: Supported 00:08:08.282 Write Zeroes Command: Supported 00:08:08.282 Set Features Save Field: Supported 00:08:08.282 Reservations: Not Supported 00:08:08.282 Timestamp: Supported 00:08:08.282 Copy: Supported 00:08:08.282 Volatile Write Cache: Present 00:08:08.282 Atomic Write Unit (Normal): 1 00:08:08.282 Atomic Write Unit (PFail): 1 00:08:08.282 Atomic Compare & Write Unit: 1 00:08:08.282 Fused Compare & Write: Not Supported 00:08:08.282 Scatter-Gather List 00:08:08.282 SGL Command Set: Supported 00:08:08.282 SGL Keyed: Not Supported 00:08:08.282 SGL Bit Bucket Descriptor: Not Supported 00:08:08.282 SGL Metadata Pointer: Not Supported 00:08:08.282 Oversized SGL: Not Supported 00:08:08.282 SGL Metadata Address: Not Supported 00:08:08.282 SGL Offset: Not Supported 00:08:08.282 Transport SGL Data Block: Not Supported 00:08:08.282 Replay Protected Memory Block: Not Supported 00:08:08.282 00:08:08.282 Firmware Slot Information 00:08:08.282 ========================= 00:08:08.282 Active slot: 1 00:08:08.282 Slot 1 Firmware Revision: 1.0 00:08:08.282 00:08:08.282 00:08:08.282 Commands Supported and Effects 00:08:08.282 
============================== 00:08:08.282 Admin Commands 00:08:08.282 -------------- 00:08:08.282 Delete I/O Submission Queue (00h): Supported 00:08:08.282 Create I/O Submission Queue (01h): Supported 00:08:08.282 Get Log Page (02h): Supported 00:08:08.282 Delete I/O Completion Queue (04h): Supported 00:08:08.282 Create I/O Completion Queue (05h): Supported 00:08:08.282 Identify (06h): Supported 00:08:08.282 Abort (08h): Supported 00:08:08.282 Set Features (09h): Supported 00:08:08.282 Get Features (0Ah): Supported 00:08:08.282 Asynchronous Event Request (0Ch): Supported 00:08:08.282 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.282 Directive Send (19h): Supported 00:08:08.282 Directive Receive (1Ah): Supported 00:08:08.282 Virtualization Management (1Ch): Supported 00:08:08.282 Doorbell Buffer Config (7Ch): Supported 00:08:08.282 Format NVM (80h): Supported LBA-Change 00:08:08.282 I/O Commands 00:08:08.282 ------------ 00:08:08.282 Flush (00h): Supported LBA-Change 00:08:08.282 Write (01h): Supported LBA-Change 00:08:08.282 Read (02h): Supported 00:08:08.282 Compare (05h): Supported 00:08:08.282 Write Zeroes (08h): Supported LBA-Change 00:08:08.282 Dataset Management (09h): Supported LBA-Change 00:08:08.282 Unknown (0Ch): Supported 00:08:08.282 Unknown (12h): Supported 00:08:08.282 Copy (19h): Supported LBA-Change 00:08:08.282 Unknown (1Dh): Supported LBA-Change 00:08:08.282 00:08:08.282 Error Log 00:08:08.282 ========= 00:08:08.282 00:08:08.282 Arbitration 00:08:08.282 =========== 00:08:08.282 Arbitration Burst: no limit 00:08:08.282 00:08:08.282 Power Management 00:08:08.282 ================ 00:08:08.282 Number of Power States: 1 00:08:08.282 Current Power State: Power State #0 00:08:08.282 Power State #0: 00:08:08.282 Max Power: 25.00 W 00:08:08.282 Non-Operational State: Operational 00:08:08.282 Entry Latency: 16 microseconds 00:08:08.282 Exit Latency: 4 microseconds 00:08:08.282 Relative Read Throughput: 0 00:08:08.282 Relative Read 
Latency: 0 00:08:08.282 Relative Write Throughput: 0 00:08:08.282 Relative Write Latency: 0 00:08:08.282
[2024-11-28 08:52:02.262172] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75201 terminated unexpected
Idle Power: Not Reported 00:08:08.282 Active Power: Not Reported 00:08:08.282 Non-Operational Permissive Mode: Not Supported 00:08:08.282 00:08:08.282 Health Information 00:08:08.282 ================== 00:08:08.282 Critical Warnings: 00:08:08.282 Available Spare Space: OK 00:08:08.282 Temperature: OK 00:08:08.282 Device Reliability: OK 00:08:08.282 Read Only: No 00:08:08.282 Volatile Memory Backup: OK 00:08:08.282 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.282 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.282 Available Spare: 0% 00:08:08.282 Available Spare Threshold: 0% 00:08:08.282 Life Percentage Used: 0% 00:08:08.282 Data Units Read: 662 00:08:08.282 Data Units Written: 590 00:08:08.282 Host Read Commands: 37588 00:08:08.282 Host Write Commands: 37374 00:08:08.282 Controller Busy Time: 0 minutes 00:08:08.282 Power Cycles: 0 00:08:08.282 Power On Hours: 0 hours 00:08:08.282 Unsafe Shutdowns: 0 00:08:08.282 Unrecoverable Media Errors: 0 00:08:08.282 Lifetime Error Log Entries: 0 00:08:08.282 Warning Temperature Time: 0 minutes 00:08:08.282 Critical Temperature Time: 0 minutes 00:08:08.282 00:08:08.282 Number of Queues 00:08:08.282 ================ 00:08:08.282 Number of I/O Submission Queues: 64 00:08:08.282 Number of I/O Completion Queues: 64 00:08:08.282 00:08:08.282 ZNS Specific Controller Data 00:08:08.282 ============================ 00:08:08.282 Zone Append Size Limit: 0 00:08:08.282 00:08:08.282 00:08:08.282 Active Namespaces 00:08:08.282 ================= 00:08:08.282 Namespace ID:1 00:08:08.282 Error Recovery Timeout: Unlimited 00:08:08.282 Command Set Identifier: NVM (00h) 00:08:08.282 Deallocate: Supported 00:08:08.282 Deallocated/Unwritten Error: Supported 00:08:08.282 
Deallocated Read Value: All 0x00 00:08:08.282 Deallocate in Write Zeroes: Not Supported 00:08:08.282 Deallocated Guard Field: 0xFFFF 00:08:08.282 Flush: Supported 00:08:08.282 Reservation: Not Supported 00:08:08.282 Metadata Transferred as: Separate Metadata Buffer 00:08:08.282 Namespace Sharing Capabilities: Private 00:08:08.282 Size (in LBAs): 1548666 (5GiB) 00:08:08.282 Capacity (in LBAs): 1548666 (5GiB) 00:08:08.282 Utilization (in LBAs): 1548666 (5GiB) 00:08:08.282 Thin Provisioning: Not Supported 00:08:08.282 Per-NS Atomic Units: No 00:08:08.282 Maximum Single Source Range Length: 128 00:08:08.282 Maximum Copy Length: 128 00:08:08.282 Maximum Source Range Count: 128 00:08:08.282 NGUID/EUI64 Never Reused: No 00:08:08.282 Namespace Write Protected: No 00:08:08.282 Number of LBA Formats: 8 00:08:08.282 Current LBA Format: LBA Format #07 00:08:08.282 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.282 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.282 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.282 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.282 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.282 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.282 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.282 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.282 00:08:08.282 NVM Specific Namespace Data 00:08:08.282 =========================== 00:08:08.282 Logical Block Storage Tag Mask: 0 00:08:08.282 Protection Information Capabilities: 00:08:08.282 16b Guard Protection Information Storage Tag Support: No 00:08:08.283 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.283 Storage Tag Check Read Support: No 00:08:08.283 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #02: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.283 ===================================================== 00:08:08.283 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:08.283 ===================================================== 00:08:08.283 Controller Capabilities/Features 00:08:08.283 ================================ 00:08:08.283 Vendor ID: 1b36 00:08:08.283 Subsystem Vendor ID: 1af4 00:08:08.283 Serial Number: 12341 00:08:08.283 Model Number: QEMU NVMe Ctrl 00:08:08.283 Firmware Version: 8.0.0 00:08:08.283 Recommended Arb Burst: 6 00:08:08.283 IEEE OUI Identifier: 00 54 52 00:08:08.283 Multi-path I/O 00:08:08.283 May have multiple subsystem ports: No 00:08:08.283 May have multiple controllers: No 00:08:08.283 Associated with SR-IOV VF: No 00:08:08.283 Max Data Transfer Size: 524288 00:08:08.283 Max Number of Namespaces: 256 00:08:08.283 Max Number of I/O Queues: 64 00:08:08.283 NVMe Specification Version (VS): 1.4 00:08:08.283 NVMe Specification Version (Identify): 1.4 00:08:08.283 Maximum Queue Entries: 2048 00:08:08.283 Contiguous Queues Required: Yes 00:08:08.283 Arbitration Mechanisms Supported 00:08:08.283 Weighted Round Robin: Not Supported 00:08:08.283 Vendor Specific: Not Supported 00:08:08.283 Reset Timeout: 7500 ms 00:08:08.283 Doorbell Stride: 4 bytes 00:08:08.283 NVM Subsystem Reset: Not Supported 00:08:08.283 Command Sets Supported 00:08:08.283 NVM Command Set: Supported 00:08:08.283 Boot Partition: Not 
Supported 00:08:08.283 Memory Page Size Minimum: 4096 bytes 00:08:08.283 Memory Page Size Maximum: 65536 bytes 00:08:08.283 Persistent Memory Region: Not Supported 00:08:08.283 Optional Asynchronous Events Supported 00:08:08.283 Namespace Attribute Notices: Supported 00:08:08.283 Firmware Activation Notices: Not Supported 00:08:08.283 ANA Change Notices: Not Supported 00:08:08.283 PLE Aggregate Log Change Notices: Not Supported 00:08:08.283 LBA Status Info Alert Notices: Not Supported 00:08:08.283 EGE Aggregate Log Change Notices: Not Supported 00:08:08.283 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.283 Zone Descriptor Change Notices: Not Supported 00:08:08.283 Discovery Log Change Notices: Not Supported 00:08:08.283 Controller Attributes 00:08:08.283 128-bit Host Identifier: Not Supported 00:08:08.283 Non-Operational Permissive Mode: Not Supported 00:08:08.283 NVM Sets: Not Supported 00:08:08.283 Read Recovery Levels: Not Supported 00:08:08.283 Endurance Groups: Not Supported 00:08:08.283 Predictable Latency Mode: Not Supported 00:08:08.283 Traffic Based Keep ALive: Not Supported 00:08:08.283 Namespace Granularity: Not Supported 00:08:08.283 SQ Associations: Not Supported 00:08:08.283 UUID List: Not Supported 00:08:08.283 Multi-Domain Subsystem: Not Supported 00:08:08.283 Fixed Capacity Management: Not Supported 00:08:08.283 Variable Capacity Management: Not Supported 00:08:08.283 Delete Endurance Group: Not Supported 00:08:08.283 Delete NVM Set: Not Supported 00:08:08.283 Extended LBA Formats Supported: Supported 00:08:08.283 Flexible Data Placement Supported: Not Supported 00:08:08.283 00:08:08.283 Controller Memory Buffer Support 00:08:08.283 ================================ 00:08:08.283 Supported: No 00:08:08.283 00:08:08.283 Persistent Memory Region Support 00:08:08.283 ================================ 00:08:08.283 Supported: No 00:08:08.283 00:08:08.283 Admin Command Set Attributes 00:08:08.283 ============================ 00:08:08.283 
Security Send/Receive: Not Supported 00:08:08.283 Format NVM: Supported 00:08:08.283 Firmware Activate/Download: Not Supported 00:08:08.283 Namespace Management: Supported 00:08:08.283 Device Self-Test: Not Supported 00:08:08.283 Directives: Supported 00:08:08.283 NVMe-MI: Not Supported 00:08:08.283 Virtualization Management: Not Supported 00:08:08.283 Doorbell Buffer Config: Supported 00:08:08.283 Get LBA Status Capability: Not Supported 00:08:08.283 Command & Feature Lockdown Capability: Not Supported 00:08:08.283 Abort Command Limit: 4 00:08:08.283 Async Event Request Limit: 4 00:08:08.283 Number of Firmware Slots: N/A 00:08:08.283 Firmware Slot 1 Read-Only: N/A 00:08:08.283 Firmware Activation Without Reset: N/A 00:08:08.283 Multiple Update Detection Support: N/A 00:08:08.283 Firmware Update Granularity: No Information Provided 00:08:08.283 Per-Namespace SMART Log: Yes 00:08:08.283 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.283 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:08.283 Command Effects Log Page: Supported 00:08:08.283 Get Log Page Extended Data: Supported 00:08:08.283 Telemetry Log Pages: Not Supported 00:08:08.283 Persistent Event Log Pages: Not Supported 00:08:08.283 Supported Log Pages Log Page: May Support 00:08:08.283 Commands Supported & Effects Log Page: Not Supported 00:08:08.283 Feature Identifiers & Effects Log Page:May Support 00:08:08.283 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.283 Data Area 4 for Telemetry Log: Not Supported 00:08:08.283 Error Log Page Entries Supported: 1 00:08:08.283 Keep Alive: Not Supported 00:08:08.283 00:08:08.283 NVM Command Set Attributes 00:08:08.283 ========================== 00:08:08.283 Submission Queue Entry Size 00:08:08.283 Max: 64 00:08:08.283 Min: 64 00:08:08.283 Completion Queue Entry Size 00:08:08.283 Max: 16 00:08:08.283 Min: 16 00:08:08.283 Number of Namespaces: 256 00:08:08.283 Compare Command: Supported 00:08:08.283 Write Uncorrectable Command: Not Supported 
00:08:08.283 Dataset Management Command: Supported 00:08:08.283 Write Zeroes Command: Supported 00:08:08.283 Set Features Save Field: Supported 00:08:08.283 Reservations: Not Supported 00:08:08.283 Timestamp: Supported 00:08:08.283 Copy: Supported 00:08:08.283 Volatile Write Cache: Present 00:08:08.283 Atomic Write Unit (Normal): 1 00:08:08.283 Atomic Write Unit (PFail): 1 00:08:08.283 Atomic Compare & Write Unit: 1 00:08:08.283 Fused Compare & Write: Not Supported 00:08:08.283 Scatter-Gather List 00:08:08.283 SGL Command Set: Supported 00:08:08.283 SGL Keyed: Not Supported 00:08:08.283 SGL Bit Bucket Descriptor: Not Supported 00:08:08.283 SGL Metadata Pointer: Not Supported 00:08:08.283 Oversized SGL: Not Supported 00:08:08.283 SGL Metadata Address: Not Supported 00:08:08.283 SGL Offset: Not Supported 00:08:08.283 Transport SGL Data Block: Not Supported 00:08:08.283 Replay Protected Memory Block: Not Supported 00:08:08.283 00:08:08.283 Firmware Slot Information 00:08:08.283 ========================= 00:08:08.283 Active slot: 1 00:08:08.283 Slot 1 Firmware Revision: 1.0 00:08:08.283 00:08:08.283 00:08:08.283 Commands Supported and Effects 00:08:08.283 ============================== 00:08:08.283 Admin Commands 00:08:08.283 -------------- 00:08:08.283 Delete I/O Submission Queue (00h): Supported 00:08:08.283 Create I/O Submission Queue (01h): Supported 00:08:08.283 Get Log Page (02h): Supported 00:08:08.283 Delete I/O Completion Queue (04h): Supported 00:08:08.283 Create I/O Completion Queue (05h): Supported 00:08:08.283 Identify (06h): Supported 00:08:08.283 Abort (08h): Supported 00:08:08.283 Set Features (09h): Supported 00:08:08.283 Get Features (0Ah): Supported 00:08:08.283 Asynchronous Event Request (0Ch): Supported 00:08:08.283 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.283 Directive Send (19h): Supported 00:08:08.283 Directive Receive (1Ah): Supported 00:08:08.283 Virtualization Management (1Ch): Supported 00:08:08.283 Doorbell Buffer 
Config (7Ch): Supported 00:08:08.283 Format NVM (80h): Supported LBA-Change 00:08:08.283 I/O Commands 00:08:08.283 ------------ 00:08:08.283 Flush (00h): Supported LBA-Change 00:08:08.283 Write (01h): Supported LBA-Change 00:08:08.283 Read (02h): Supported 00:08:08.283 Compare (05h): Supported 00:08:08.283 Write Zeroes (08h): Supported LBA-Change 00:08:08.283 Dataset Management (09h): Supported LBA-Change 00:08:08.283 Unknown (0Ch): Supported 00:08:08.283 Unknown (12h): Supported 00:08:08.283 Copy (19h): Supported LBA-Change 00:08:08.283 Unknown (1Dh): Supported LBA-Change 00:08:08.283 00:08:08.283 Error Log 00:08:08.283 ========= 00:08:08.283 00:08:08.283 Arbitration 00:08:08.283 =========== 00:08:08.283 Arbitration Burst: no limit 00:08:08.283 00:08:08.283 Power Management 00:08:08.284 ================ 00:08:08.284 Number of Power States: 1 00:08:08.284 Current Power State: Power State #0 00:08:08.284 Power State #0: 00:08:08.284 Max Power: 25.00 W 00:08:08.284 Non-Operational State: Operational 00:08:08.284 Entry Latency: 16 microseconds 00:08:08.284 Exit Latency: 4 microseconds 00:08:08.284 Relative Read Throughput: 0 00:08:08.284 Relative Read Latency: 0 00:08:08.284 Relative Write Throughput: 0 00:08:08.284 Relative Write Latency: 0 00:08:08.284 Idle Power: Not Reported 00:08:08.284 Active Power: Not Reported 00:08:08.284 Non-Operational Permissive Mode: Not Supported 00:08:08.284 00:08:08.284 Health Information 00:08:08.284 ================== 00:08:08.284 Critical Warnings: 00:08:08.284 Available Spare Space: OK 00:08:08.284
[2024-11-28 08:52:02.263130] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75201 terminated unexpected
Temperature: OK 00:08:08.284 Device Reliability: OK 00:08:08.284 Read Only: No 00:08:08.284 Volatile Memory Backup: OK 00:08:08.284 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.284 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.284 Available Spare: 0% 00:08:08.284 
Available Spare Threshold: 0% 00:08:08.284 Life Percentage Used: 0% 00:08:08.284 Data Units Read: 1060 00:08:08.284 Data Units Written: 925 00:08:08.284 Host Read Commands: 56957 00:08:08.284 Host Write Commands: 55715 00:08:08.284 Controller Busy Time: 0 minutes 00:08:08.284 Power Cycles: 0 00:08:08.284 Power On Hours: 0 hours 00:08:08.284 Unsafe Shutdowns: 0 00:08:08.284 Unrecoverable Media Errors: 0 00:08:08.284 Lifetime Error Log Entries: 0 00:08:08.284 Warning Temperature Time: 0 minutes 00:08:08.284 Critical Temperature Time: 0 minutes 00:08:08.284 00:08:08.284 Number of Queues 00:08:08.284 ================ 00:08:08.284 Number of I/O Submission Queues: 64 00:08:08.284 Number of I/O Completion Queues: 64 00:08:08.284 00:08:08.284 ZNS Specific Controller Data 00:08:08.284 ============================ 00:08:08.284 Zone Append Size Limit: 0 00:08:08.284 00:08:08.284 00:08:08.284 Active Namespaces 00:08:08.284 ================= 00:08:08.284 Namespace ID:1 00:08:08.284 Error Recovery Timeout: Unlimited 00:08:08.284 Command Set Identifier: NVM (00h) 00:08:08.284 Deallocate: Supported 00:08:08.284 Deallocated/Unwritten Error: Supported 00:08:08.284 Deallocated Read Value: All 0x00 00:08:08.284 Deallocate in Write Zeroes: Not Supported 00:08:08.284 Deallocated Guard Field: 0xFFFF 00:08:08.284 Flush: Supported 00:08:08.284 Reservation: Not Supported 00:08:08.284 Namespace Sharing Capabilities: Private 00:08:08.284 Size (in LBAs): 1310720 (5GiB) 00:08:08.284 Capacity (in LBAs): 1310720 (5GiB) 00:08:08.284 Utilization (in LBAs): 1310720 (5GiB) 00:08:08.284 Thin Provisioning: Not Supported 00:08:08.284 Per-NS Atomic Units: No 00:08:08.284 Maximum Single Source Range Length: 128 00:08:08.284 Maximum Copy Length: 128 00:08:08.284 Maximum Source Range Count: 128 00:08:08.284 NGUID/EUI64 Never Reused: No 00:08:08.284 Namespace Write Protected: No 00:08:08.284 Number of LBA Formats: 8 00:08:08.284 Current LBA Format: LBA Format #04 00:08:08.284 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:08:08.284 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.284 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.284 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.284 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.284 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.284 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.284 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.284 00:08:08.284 NVM Specific Namespace Data 00:08:08.284 =========================== 00:08:08.284 Logical Block Storage Tag Mask: 0 00:08:08.284 Protection Information Capabilities: 00:08:08.284 16b Guard Protection Information Storage Tag Support: No 00:08:08.284 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.284 Storage Tag Check Read Support: No 00:08:08.284 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.284 ===================================================== 00:08:08.284 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:08.284 ===================================================== 00:08:08.284 Controller Capabilities/Features 00:08:08.284 ================================ 00:08:08.284 Vendor ID: 1b36 
00:08:08.284 Subsystem Vendor ID: 1af4 00:08:08.284 Serial Number: 12343 00:08:08.284 Model Number: QEMU NVMe Ctrl 00:08:08.284 Firmware Version: 8.0.0 00:08:08.284 Recommended Arb Burst: 6 00:08:08.284 IEEE OUI Identifier: 00 54 52 00:08:08.284 Multi-path I/O 00:08:08.284 May have multiple subsystem ports: No 00:08:08.284 May have multiple controllers: Yes 00:08:08.284 Associated with SR-IOV VF: No 00:08:08.284 Max Data Transfer Size: 524288 00:08:08.284 Max Number of Namespaces: 256 00:08:08.284 Max Number of I/O Queues: 64 00:08:08.284 NVMe Specification Version (VS): 1.4 00:08:08.284 NVMe Specification Version (Identify): 1.4 00:08:08.284 Maximum Queue Entries: 2048 00:08:08.284 Contiguous Queues Required: Yes 00:08:08.284 Arbitration Mechanisms Supported 00:08:08.284 Weighted Round Robin: Not Supported 00:08:08.284 Vendor Specific: Not Supported 00:08:08.284 Reset Timeout: 7500 ms 00:08:08.284 Doorbell Stride: 4 bytes 00:08:08.284 NVM Subsystem Reset: Not Supported 00:08:08.284 Command Sets Supported 00:08:08.284 NVM Command Set: Supported 00:08:08.284 Boot Partition: Not Supported 00:08:08.284 Memory Page Size Minimum: 4096 bytes 00:08:08.284 Memory Page Size Maximum: 65536 bytes 00:08:08.284 Persistent Memory Region: Not Supported 00:08:08.284 Optional Asynchronous Events Supported 00:08:08.284 Namespace Attribute Notices: Supported 00:08:08.284 Firmware Activation Notices: Not Supported 00:08:08.284 ANA Change Notices: Not Supported 00:08:08.284 PLE Aggregate Log Change Notices: Not Supported 00:08:08.284 LBA Status Info Alert Notices: Not Supported 00:08:08.284 EGE Aggregate Log Change Notices: Not Supported 00:08:08.284 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.284 Zone Descriptor Change Notices: Not Supported 00:08:08.284 Discovery Log Change Notices: Not Supported 00:08:08.284 Controller Attributes 00:08:08.284 128-bit Host Identifier: Not Supported 00:08:08.284 Non-Operational Permissive Mode: Not Supported 00:08:08.284 NVM Sets: Not 
Supported 00:08:08.284 Read Recovery Levels: Not Supported 00:08:08.284 Endurance Groups: Supported 00:08:08.284 Predictable Latency Mode: Not Supported 00:08:08.284 Traffic Based Keep ALive: Not Supported 00:08:08.284 Namespace Granularity: Not Supported 00:08:08.284 SQ Associations: Not Supported 00:08:08.284 UUID List: Not Supported 00:08:08.284 Multi-Domain Subsystem: Not Supported 00:08:08.284 Fixed Capacity Management: Not Supported 00:08:08.284 Variable Capacity Management: Not Supported 00:08:08.284 Delete Endurance Group: Not Supported 00:08:08.284 Delete NVM Set: Not Supported 00:08:08.284 Extended LBA Formats Supported: Supported 00:08:08.284 Flexible Data Placement Supported: Supported 00:08:08.284 00:08:08.284 Controller Memory Buffer Support 00:08:08.284 ================================ 00:08:08.284 Supported: No 00:08:08.284 00:08:08.284 Persistent Memory Region Support 00:08:08.284 ================================ 00:08:08.284 Supported: No 00:08:08.284 00:08:08.284 Admin Command Set Attributes 00:08:08.284 ============================ 00:08:08.284 Security Send/Receive: Not Supported 00:08:08.284 Format NVM: Supported 00:08:08.284 Firmware Activate/Download: Not Supported 00:08:08.284 Namespace Management: Supported 00:08:08.284 Device Self-Test: Not Supported 00:08:08.284 Directives: Supported 00:08:08.284 NVMe-MI: Not Supported 00:08:08.284 Virtualization Management: Not Supported 00:08:08.284 Doorbell Buffer Config: Supported 00:08:08.284 Get LBA Status Capability: Not Supported 00:08:08.284 Command & Feature Lockdown Capability: Not Supported 00:08:08.284 Abort Command Limit: 4 00:08:08.284 Async Event Request Limit: 4 00:08:08.284 Number of Firmware Slots: N/A 00:08:08.284 Firmware Slot 1 Read-Only: N/A 00:08:08.284 Firmware Activation Without Reset: N/A 00:08:08.284 Multiple Update Detection Support: N/A 00:08:08.284 Firmware Update Granularity: No Information Provided 00:08:08.284 Per-Namespace SMART Log: Yes 00:08:08.284 Asymmetric 
Namespace Access Log Page: Not Supported 00:08:08.285 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:08.285 Command Effects Log Page: Supported 00:08:08.285 Get Log Page Extended Data: Supported 00:08:08.285 Telemetry Log Pages: Not Supported 00:08:08.285 Persistent Event Log Pages: Not Supported 00:08:08.285 Supported Log Pages Log Page: May Support 00:08:08.285 Commands Supported & Effects Log Page: Not Supported 00:08:08.285 Feature Identifiers & Effects Log Page:May Support 00:08:08.285 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.285 Data Area 4 for Telemetry Log: Not Supported 00:08:08.285 Error Log Page Entries Supported: 1 00:08:08.285 Keep Alive: Not Supported 00:08:08.285 00:08:08.285 NVM Command Set Attributes 00:08:08.285 ========================== 00:08:08.285 Submission Queue Entry Size 00:08:08.285 Max: 64 00:08:08.285 Min: 64 00:08:08.285 Completion Queue Entry Size 00:08:08.285 Max: 16 00:08:08.285 Min: 16 00:08:08.285 Number of Namespaces: 256 00:08:08.285 Compare Command: Supported 00:08:08.285 Write Uncorrectable Command: Not Supported 00:08:08.285 Dataset Management Command: Supported 00:08:08.285 Write Zeroes Command: Supported 00:08:08.285 Set Features Save Field: Supported 00:08:08.285 Reservations: Not Supported 00:08:08.285 Timestamp: Supported 00:08:08.285 Copy: Supported 00:08:08.285 Volatile Write Cache: Present 00:08:08.285 Atomic Write Unit (Normal): 1 00:08:08.285 Atomic Write Unit (PFail): 1 00:08:08.285 Atomic Compare & Write Unit: 1 00:08:08.285 Fused Compare & Write: Not Supported 00:08:08.285 Scatter-Gather List 00:08:08.285 SGL Command Set: Supported 00:08:08.285 SGL Keyed: Not Supported 00:08:08.285 SGL Bit Bucket Descriptor: Not Supported 00:08:08.285 SGL Metadata Pointer: Not Supported 00:08:08.285 Oversized SGL: Not Supported 00:08:08.285 SGL Metadata Address: Not Supported 00:08:08.285 SGL Offset: Not Supported 00:08:08.285 Transport SGL Data Block: Not Supported 00:08:08.285 Replay Protected Memory 
Block: Not Supported 00:08:08.285 00:08:08.285 Firmware Slot Information 00:08:08.285 ========================= 00:08:08.285 Active slot: 1 00:08:08.285 Slot 1 Firmware Revision: 1.0 00:08:08.285 00:08:08.285 00:08:08.285 Commands Supported and Effects 00:08:08.285 ============================== 00:08:08.285 Admin Commands 00:08:08.285 -------------- 00:08:08.285 Delete I/O Submission Queue (00h): Supported 00:08:08.285 Create I/O Submission Queue (01h): Supported 00:08:08.285 Get Log Page (02h): Supported 00:08:08.285 Delete I/O Completion Queue (04h): Supported 00:08:08.285 Create I/O Completion Queue (05h): Supported 00:08:08.285 Identify (06h): Supported 00:08:08.285 Abort (08h): Supported 00:08:08.285 Set Features (09h): Supported 00:08:08.285 Get Features (0Ah): Supported 00:08:08.285 Asynchronous Event Request (0Ch): Supported 00:08:08.285 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.285 Directive Send (19h): Supported 00:08:08.285 Directive Receive (1Ah): Supported 00:08:08.285 Virtualization Management (1Ch): Supported 00:08:08.285 Doorbell Buffer Config (7Ch): Supported 00:08:08.285 Format NVM (80h): Supported LBA-Change 00:08:08.285 I/O Commands 00:08:08.285 ------------ 00:08:08.285 Flush (00h): Supported LBA-Change 00:08:08.285 Write (01h): Supported LBA-Change 00:08:08.285 Read (02h): Supported 00:08:08.285 Compare (05h): Supported 00:08:08.285 Write Zeroes (08h): Supported LBA-Change 00:08:08.285 Dataset Management (09h): Supported LBA-Change 00:08:08.285 Unknown (0Ch): Supported 00:08:08.285 Unknown (12h): Supported 00:08:08.285 Copy (19h): Supported LBA-Change 00:08:08.285 Unknown (1Dh): Supported LBA-Change 00:08:08.285 00:08:08.285 Error Log 00:08:08.285 ========= 00:08:08.285 00:08:08.285 Arbitration 00:08:08.285 =========== 00:08:08.285 Arbitration Burst: no limit 00:08:08.285 00:08:08.285 Power Management 00:08:08.285 ================ 00:08:08.285 Number of Power States: 1 00:08:08.285 Current Power State: Power State #0 
00:08:08.285 Power State #0: 00:08:08.285 Max Power: 25.00 W 00:08:08.285 Non-Operational State: Operational 00:08:08.285 Entry Latency: 16 microseconds 00:08:08.285 Exit Latency: 4 microseconds 00:08:08.285 Relative Read Throughput: 0 00:08:08.285 Relative Read Latency: 0 00:08:08.285 Relative Write Throughput: 0 00:08:08.285 Relative Write Latency: 0 00:08:08.285 Idle Power: Not Reported 00:08:08.285 Active Power: Not Reported 00:08:08.285 Non-Operational Permissive Mode: Not Supported 00:08:08.285 00:08:08.285 Health Information 00:08:08.285 ================== 00:08:08.285 Critical Warnings: 00:08:08.285 Available Spare Space: OK 00:08:08.285 Temperature: OK 00:08:08.285 Device Reliability: OK 00:08:08.285 Read Only: No 00:08:08.285 Volatile Memory Backup: OK 00:08:08.285 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.285 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.285 Available Spare: 0% 00:08:08.285 Available Spare Threshold: 0% 00:08:08.285 Life Percentage Used: 0% 00:08:08.285 Data Units Read: 947 00:08:08.285 Data Units Written: 876 00:08:08.285 Host Read Commands: 40176 00:08:08.285 Host Write Commands: 39600 00:08:08.285 Controller Busy Time: 0 minutes 00:08:08.285 Power Cycles: 0 00:08:08.285 Power On Hours: 0 hours 00:08:08.285 Unsafe Shutdowns: 0 00:08:08.285 Unrecoverable Media Errors: 0 00:08:08.285 Lifetime Error Log Entries: 0 00:08:08.285 Warning Temperature Time: 0 minutes 00:08:08.285 Critical Temperature Time: 0 minutes 00:08:08.285 00:08:08.285 Number of Queues 00:08:08.285 ================ 00:08:08.285 Number of I/O Submission Queues: 64 00:08:08.285 Number of I/O Completion Queues: 64 00:08:08.285 00:08:08.285 ZNS Specific Controller Data 00:08:08.285 ============================ 00:08:08.285 Zone Append Size Limit: 0 00:08:08.285 00:08:08.285 00:08:08.285 Active Namespaces 00:08:08.285 ================= 00:08:08.285 Namespace ID:1 00:08:08.285 Error Recovery Timeout: Unlimited 00:08:08.285 Command Set Identifier: NVM 
(00h) 00:08:08.285 Deallocate: Supported 00:08:08.285 Deallocated/Unwritten Error: Supported 00:08:08.285 Deallocated Read Value: All 0x00 00:08:08.285 Deallocate in Write Zeroes: Not Supported 00:08:08.285 Deallocated Guard Field: 0xFFFF 00:08:08.285 Flush: Supported 00:08:08.285 Reservation: Not Supported 00:08:08.285 Namespace Sharing Capabilities: Multiple Controllers 00:08:08.285 Size (in LBAs): 262144 (1GiB) 00:08:08.285 Capacity (in LBAs): 262144 (1GiB) 00:08:08.285 Utilization (in LBAs): 262144 (1GiB) 00:08:08.285 Thin Provisioning: Not Supported 00:08:08.285 Per-NS Atomic Units: No 00:08:08.285 Maximum Single Source Range Length: 128 00:08:08.285 Maximum Copy Length: 128 00:08:08.285 Maximum Source Range Count: 128 00:08:08.285 NGUID/EUI64 Never Reused: No 00:08:08.285 Namespace Write Protected: No 00:08:08.285 Endurance group ID: 1 00:08:08.285 Number of LBA Formats: 8 00:08:08.285 Current LBA Format: LBA Format #04 00:08:08.285 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.285 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.285 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.285 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.285 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.285 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.285 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.285 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.285 00:08:08.285 Get Feature FDP: 00:08:08.285 ================ 00:08:08.285 Enabled: Yes 00:08:08.285 FDP configuration index: 0 00:08:08.285 00:08:08.285 FDP configurations log page 00:08:08.285 =========================== 00:08:08.285 Number of FDP configurations: 1 00:08:08.285 Version: 0 00:08:08.285 Size: 112 00:08:08.285 FDP Configuration Descriptor: 0 00:08:08.285 Descriptor Size: 96 00:08:08.285 Reclaim Group Identifier format: 2 00:08:08.285 FDP Volatile Write Cache: Not Present 00:08:08.285 FDP Configuration: Valid 00:08:08.285 Vendor 
Specific Size: 0 00:08:08.285 Number of Reclaim Groups: 2 00:08:08.285 Number of Reclaim Unit Handles: 8 00:08:08.285 Max Placement Identifiers: 128 00:08:08.285 Number of Namespaces Supported: 256 00:08:08.285 Reclaim unit Nominal Size: 6000000 bytes 00:08:08.285 Estimated Reclaim Unit Time Limit: Not Reported 00:08:08.285 RUH Desc #000: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #001: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #002: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #003: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #004: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #005: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #006: RUH Type: Initially Isolated 00:08:08.285 RUH Desc #007: RUH Type: Initially Isolated 00:08:08.286 00:08:08.286 FDP reclaim unit handle usage log page 00:08:08.286 ====================================== 00:08:08.286 Number of Reclaim Unit Handles: 8 00:08:08.286 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:08.286 RUH Usage Desc #001: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #002: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #003: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #004: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #005: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #006: RUH Attributes: Unused 00:08:08.286 RUH Usage Desc #007: RUH Attributes: Unused 00:08:08.286 00:08:08.286 FDP statistics log page 00:08:08.286 ======================= 00:08:08.286 Host bytes with metadata written: 544251904 00:08:08.286
[2024-11-28 08:52:02.264863] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75201 terminated unexpected
Media bytes with metadata written: 544321536 00:08:08.286 Media bytes erased: 0 00:08:08.286 00:08:08.286 FDP events log page 00:08:08.286 =================== 00:08:08.286 Number of FDP events: 0 00:08:08.286 00:08:08.286 NVM Specific Namespace Data 00:08:08.286 
=========================== 00:08:08.286 Logical Block Storage Tag Mask: 0 00:08:08.286 Protection Information Capabilities: 00:08:08.286 16b Guard Protection Information Storage Tag Support: No 00:08:08.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.286 Storage Tag Check Read Support: No 00:08:08.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.286 ===================================================== 00:08:08.286 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:08.286 ===================================================== 00:08:08.286 Controller Capabilities/Features 00:08:08.286 ================================ 00:08:08.286 Vendor ID: 1b36 00:08:08.286 Subsystem Vendor ID: 1af4 00:08:08.286 Serial Number: 12342 00:08:08.286 Model Number: QEMU NVMe Ctrl 00:08:08.286 Firmware Version: 8.0.0 00:08:08.286 Recommended Arb Burst: 6 00:08:08.286 IEEE OUI Identifier: 00 54 52 00:08:08.286 Multi-path I/O 00:08:08.286 May have multiple subsystem ports: No 00:08:08.286 May have multiple controllers: No 00:08:08.286 Associated with SR-IOV VF: No 00:08:08.286 Max Data Transfer Size: 524288 00:08:08.286 Max Number of Namespaces: 256 00:08:08.286 Max Number of I/O 
Queues: 64 00:08:08.286 NVMe Specification Version (VS): 1.4 00:08:08.286 NVMe Specification Version (Identify): 1.4 00:08:08.286 Maximum Queue Entries: 2048 00:08:08.286 Contiguous Queues Required: Yes 00:08:08.286 Arbitration Mechanisms Supported 00:08:08.286 Weighted Round Robin: Not Supported 00:08:08.286 Vendor Specific: Not Supported 00:08:08.286 Reset Timeout: 7500 ms 00:08:08.286 Doorbell Stride: 4 bytes 00:08:08.286 NVM Subsystem Reset: Not Supported 00:08:08.286 Command Sets Supported 00:08:08.286 NVM Command Set: Supported 00:08:08.286 Boot Partition: Not Supported 00:08:08.286 Memory Page Size Minimum: 4096 bytes 00:08:08.286 Memory Page Size Maximum: 65536 bytes 00:08:08.286 Persistent Memory Region: Not Supported 00:08:08.286 Optional Asynchronous Events Supported 00:08:08.286 Namespace Attribute Notices: Supported 00:08:08.286 Firmware Activation Notices: Not Supported 00:08:08.286 ANA Change Notices: Not Supported 00:08:08.286 PLE Aggregate Log Change Notices: Not Supported 00:08:08.286 LBA Status Info Alert Notices: Not Supported 00:08:08.286 EGE Aggregate Log Change Notices: Not Supported 00:08:08.286 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.286 Zone Descriptor Change Notices: Not Supported 00:08:08.286 Discovery Log Change Notices: Not Supported 00:08:08.286 Controller Attributes 00:08:08.286 128-bit Host Identifier: Not Supported 00:08:08.286 Non-Operational Permissive Mode: Not Supported 00:08:08.286 NVM Sets: Not Supported 00:08:08.286 Read Recovery Levels: Not Supported 00:08:08.286 Endurance Groups: Not Supported 00:08:08.286 Predictable Latency Mode: Not Supported 00:08:08.286 Traffic Based Keep Alive: Not Supported 00:08:08.286 Namespace Granularity: Not Supported 00:08:08.286 SQ Associations: Not Supported 00:08:08.286 UUID List: Not Supported 00:08:08.286 Multi-Domain Subsystem: Not Supported 00:08:08.286 Fixed Capacity Management: Not Supported 00:08:08.286 Variable Capacity Management: Not Supported 00:08:08.286 
Delete Endurance Group: Not Supported 00:08:08.286 Delete NVM Set: Not Supported 00:08:08.286 Extended LBA Formats Supported: Supported 00:08:08.286 Flexible Data Placement Supported: Not Supported 00:08:08.286 00:08:08.286 Controller Memory Buffer Support 00:08:08.286 ================================ 00:08:08.286 Supported: No 00:08:08.286 00:08:08.286 Persistent Memory Region Support 00:08:08.286 ================================ 00:08:08.286 Supported: No 00:08:08.286 00:08:08.286 Admin Command Set Attributes 00:08:08.286 ============================ 00:08:08.286 Security Send/Receive: Not Supported 00:08:08.286 Format NVM: Supported 00:08:08.286 Firmware Activate/Download: Not Supported 00:08:08.286 Namespace Management: Supported 00:08:08.286 Device Self-Test: Not Supported 00:08:08.286 Directives: Supported 00:08:08.286 NVMe-MI: Not Supported 00:08:08.286 Virtualization Management: Not Supported 00:08:08.286 Doorbell Buffer Config: Supported 00:08:08.286 Get LBA Status Capability: Not Supported 00:08:08.286 Command & Feature Lockdown Capability: Not Supported 00:08:08.286 Abort Command Limit: 4 00:08:08.286 Async Event Request Limit: 4 00:08:08.286 Number of Firmware Slots: N/A 00:08:08.286 Firmware Slot 1 Read-Only: N/A 00:08:08.286 Firmware Activation Without Reset: N/A 00:08:08.286 Multiple Update Detection Support: N/A 00:08:08.286 Firmware Update Granularity: No Information Provided 00:08:08.286 Per-Namespace SMART Log: Yes 00:08:08.286 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.286 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:08.286 Command Effects Log Page: Supported 00:08:08.286 Get Log Page Extended Data: Supported 00:08:08.286 Telemetry Log Pages: Not Supported 00:08:08.286 Persistent Event Log Pages: Not Supported 00:08:08.286 Supported Log Pages Log Page: May Support 00:08:08.286 Commands Supported & Effects Log Page: Not Supported 00:08:08.286 Feature Identifiers & Effects Log Page: May Support 00:08:08.286 NVMe-MI Commands & 
Effects Log Page: May Support 00:08:08.286 Data Area 4 for Telemetry Log: Not Supported 00:08:08.286 Error Log Page Entries Supported: 1 00:08:08.286 Keep Alive: Not Supported 00:08:08.286 00:08:08.286 NVM Command Set Attributes 00:08:08.286 ========================== 00:08:08.286 Submission Queue Entry Size 00:08:08.286 Max: 64 00:08:08.286 Min: 64 00:08:08.286 Completion Queue Entry Size 00:08:08.286 Max: 16 00:08:08.286 Min: 16 00:08:08.286 Number of Namespaces: 256 00:08:08.286 Compare Command: Supported 00:08:08.286 Write Uncorrectable Command: Not Supported 00:08:08.286 Dataset Management Command: Supported 00:08:08.286 Write Zeroes Command: Supported 00:08:08.286 Set Features Save Field: Supported 00:08:08.286 Reservations: Not Supported 00:08:08.286 Timestamp: Supported 00:08:08.286 Copy: Supported 00:08:08.286 Volatile Write Cache: Present 00:08:08.287 Atomic Write Unit (Normal): 1 00:08:08.287 Atomic Write Unit (PFail): 1 00:08:08.287 Atomic Compare & Write Unit: 1 00:08:08.287 Fused Compare & Write: Not Supported 00:08:08.287 Scatter-Gather List 00:08:08.287 SGL Command Set: Supported 00:08:08.287 SGL Keyed: Not Supported 00:08:08.287 SGL Bit Bucket Descriptor: Not Supported 00:08:08.287 SGL Metadata Pointer: Not Supported 00:08:08.287 Oversized SGL: Not Supported 00:08:08.287 SGL Metadata Address: Not Supported 00:08:08.287 SGL Offset: Not Supported 00:08:08.287 Transport SGL Data Block: Not Supported 00:08:08.287 Replay Protected Memory Block: Not Supported 00:08:08.287 00:08:08.287 Firmware Slot Information 00:08:08.287 ========================= 00:08:08.287 Active slot: 1 00:08:08.287 Slot 1 Firmware Revision: 1.0 00:08:08.287 00:08:08.287 00:08:08.287 Commands Supported and Effects 00:08:08.287 ============================== 00:08:08.287 Admin Commands 00:08:08.287 -------------- 00:08:08.287 Delete I/O Submission Queue (00h): Supported 00:08:08.287 Create I/O Submission Queue (01h): Supported 00:08:08.287 Get Log Page (02h): Supported 00:08:08.287 
Delete I/O Completion Queue (04h): Supported 00:08:08.287 Create I/O Completion Queue (05h): Supported 00:08:08.287 Identify (06h): Supported 00:08:08.287 Abort (08h): Supported 00:08:08.287 Set Features (09h): Supported 00:08:08.287 Get Features (0Ah): Supported 00:08:08.287 Asynchronous Event Request (0Ch): Supported 00:08:08.287 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.287 Directive Send (19h): Supported 00:08:08.287 Directive Receive (1Ah): Supported 00:08:08.287 Virtualization Management (1Ch): Supported 00:08:08.287 Doorbell Buffer Config (7Ch): Supported 00:08:08.287 Format NVM (80h): Supported LBA-Change 00:08:08.287 I/O Commands 00:08:08.287 ------------ 00:08:08.287 Flush (00h): Supported LBA-Change 00:08:08.287 Write (01h): Supported LBA-Change 00:08:08.287 Read (02h): Supported 00:08:08.287 Compare (05h): Supported 00:08:08.287 Write Zeroes (08h): Supported LBA-Change 00:08:08.287 Dataset Management (09h): Supported LBA-Change 00:08:08.287 Unknown (0Ch): Supported 00:08:08.287 Unknown (12h): Supported 00:08:08.287 Copy (19h): Supported LBA-Change 00:08:08.287 Unknown (1Dh): Supported LBA-Change 00:08:08.287 00:08:08.287 Error Log 00:08:08.287 ========= 00:08:08.287 00:08:08.287 Arbitration 00:08:08.287 =========== 00:08:08.287 Arbitration Burst: no limit 00:08:08.287 00:08:08.287 Power Management 00:08:08.287 ================ 00:08:08.287 Number of Power States: 1 00:08:08.287 Current Power State: Power State #0 00:08:08.287 Power State #0: 00:08:08.287 Max Power: 25.00 W 00:08:08.287 Non-Operational State: Operational 00:08:08.287 Entry Latency: 16 microseconds 00:08:08.287 Exit Latency: 4 microseconds 00:08:08.287 Relative Read Throughput: 0 00:08:08.287 Relative Read Latency: 0 00:08:08.287 Relative Write Throughput: 0 00:08:08.287 Relative Write Latency: 0 00:08:08.287 Idle Power: Not Reported 00:08:08.287 Active Power: Not Reported 00:08:08.287 Non-Operational Permissive Mode: Not Supported 00:08:08.287 00:08:08.287 Health 
Information 00:08:08.287 ================== 00:08:08.287 Critical Warnings: 00:08:08.287 Available Spare Space: OK 00:08:08.287 Temperature: OK 00:08:08.287 Device Reliability: OK 00:08:08.287 Read Only: No 00:08:08.287 Volatile Memory Backup: OK 00:08:08.287 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.287 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.287 Available Spare: 0% 00:08:08.287 Available Spare Threshold: 0% 00:08:08.287 Life Percentage Used: 0% 00:08:08.287 Data Units Read: 2215 00:08:08.287 Data Units Written: 2002 00:08:08.287 Host Read Commands: 115227 00:08:08.287 Host Write Commands: 113496 00:08:08.287 Controller Busy Time: 0 minutes 00:08:08.287 Power Cycles: 0 00:08:08.287 Power On Hours: 0 hours 00:08:08.287 Unsafe Shutdowns: 0 00:08:08.287 Unrecoverable Media Errors: 0 00:08:08.287 Lifetime Error Log Entries: 0 00:08:08.287 Warning Temperature Time: 0 minutes 00:08:08.287 Critical Temperature Time: 0 minutes 00:08:08.287 00:08:08.287 Number of Queues 00:08:08.287 ================ 00:08:08.287 Number of I/O Submission Queues: 64 00:08:08.287 Number of I/O Completion Queues: 64 00:08:08.287 00:08:08.287 ZNS Specific Controller Data 00:08:08.287 ============================ 00:08:08.287 Zone Append Size Limit: 0 00:08:08.287 00:08:08.287 00:08:08.287 Active Namespaces 00:08:08.287 ================= 00:08:08.287 Namespace ID:1 00:08:08.287 Error Recovery Timeout: Unlimited 00:08:08.287 Command Set Identifier: NVM (00h) 00:08:08.287 Deallocate: Supported 00:08:08.287 Deallocated/Unwritten Error: Supported 00:08:08.287 Deallocated Read Value: All 0x00 00:08:08.287 Deallocate in Write Zeroes: Not Supported 00:08:08.287 Deallocated Guard Field: 0xFFFF 00:08:08.287 Flush: Supported 00:08:08.287 Reservation: Not Supported 00:08:08.287 Namespace Sharing Capabilities: Private 00:08:08.287 Size (in LBAs): 1048576 (4GiB) 00:08:08.287 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.287 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.287 Thin 
Provisioning: Not Supported 00:08:08.287 Per-NS Atomic Units: No 00:08:08.287 Maximum Single Source Range Length: 128 00:08:08.287 Maximum Copy Length: 128 00:08:08.287 Maximum Source Range Count: 128 00:08:08.287 NGUID/EUI64 Never Reused: No 00:08:08.287 Namespace Write Protected: No 00:08:08.287 Number of LBA Formats: 8 00:08:08.287 Current LBA Format: LBA Format #04 00:08:08.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.287 00:08:08.287 NVM Specific Namespace Data 00:08:08.287 =========================== 00:08:08.287 Logical Block Storage Tag Mask: 0 00:08:08.287 Protection Information Capabilities: 00:08:08.287 16b Guard Protection Information Storage Tag Support: No 00:08:08.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.287 Storage Tag Check Read Support: No 00:08:08.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:08:08.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.287 Namespace ID:2 00:08:08.287 Error Recovery Timeout: Unlimited 00:08:08.287 Command Set Identifier: NVM (00h) 00:08:08.287 Deallocate: Supported 00:08:08.287 Deallocated/Unwritten Error: Supported 00:08:08.287 Deallocated Read Value: All 0x00 00:08:08.287 Deallocate in Write Zeroes: Not Supported 00:08:08.287 Deallocated Guard Field: 0xFFFF 00:08:08.287 Flush: Supported 00:08:08.287 Reservation: Not Supported 00:08:08.287 Namespace Sharing Capabilities: Private 00:08:08.287 Size (in LBAs): 1048576 (4GiB) 00:08:08.287 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.287 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.287 Thin Provisioning: Not Supported 00:08:08.287 Per-NS Atomic Units: No 00:08:08.287 Maximum Single Source Range Length: 128 00:08:08.287 Maximum Copy Length: 128 00:08:08.287 Maximum Source Range Count: 128 00:08:08.287 NGUID/EUI64 Never Reused: No 00:08:08.287 Namespace Write Protected: No 00:08:08.287 Number of LBA Formats: 8 00:08:08.287 Current LBA Format: LBA Format #04 00:08:08.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.287 00:08:08.287 NVM Specific Namespace Data 00:08:08.287 =========================== 00:08:08.287 Logical Block Storage Tag Mask: 0 00:08:08.287 Protection Information Capabilities: 00:08:08.288 16b Guard Protection Information Storage Tag Support: No 00:08:08.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.288 Storage Tag Check Read 
Support: No 00:08:08.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Namespace ID:3 00:08:08.288 Error Recovery Timeout: Unlimited 00:08:08.288 Command Set Identifier: NVM (00h) 00:08:08.288 Deallocate: Supported 00:08:08.288 Deallocated/Unwritten Error: Supported 00:08:08.288 Deallocated Read Value: All 0x00 00:08:08.288 Deallocate in Write Zeroes: Not Supported 00:08:08.288 Deallocated Guard Field: 0xFFFF 00:08:08.288 Flush: Supported 00:08:08.288 Reservation: Not Supported 00:08:08.288 Namespace Sharing Capabilities: Private 00:08:08.288 Size (in LBAs): 1048576 (4GiB) 00:08:08.288 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.288 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.288 Thin Provisioning: Not Supported 00:08:08.288 Per-NS Atomic Units: No 00:08:08.288 Maximum Single Source Range Length: 128 00:08:08.288 Maximum Copy Length: 128 00:08:08.288 Maximum Source Range Count: 128 00:08:08.288 NGUID/EUI64 Never Reused: No 00:08:08.288 Namespace Write Protected: No 00:08:08.288 Number of LBA Formats: 8 00:08:08.288 Current LBA Format: LBA Format #04 00:08:08.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.288 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:08:08.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.288 00:08:08.288 NVM Specific Namespace Data 00:08:08.288 =========================== 00:08:08.288 Logical Block Storage Tag Mask: 0 00:08:08.288 Protection Information Capabilities: 00:08:08.288 16b Guard Protection Information Storage Tag Support: No 00:08:08.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.288 Storage Tag Check Read Support: No 00:08:08.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.288 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:08.288 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:08.550 ===================================================== 00:08:08.550 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:08.550 
===================================================== 00:08:08.550 Controller Capabilities/Features 00:08:08.550 ================================ 00:08:08.550 Vendor ID: 1b36 00:08:08.550 Subsystem Vendor ID: 1af4 00:08:08.550 Serial Number: 12340 00:08:08.550 Model Number: QEMU NVMe Ctrl 00:08:08.550 Firmware Version: 8.0.0 00:08:08.550 Recommended Arb Burst: 6 00:08:08.550 IEEE OUI Identifier: 00 54 52 00:08:08.550 Multi-path I/O 00:08:08.550 May have multiple subsystem ports: No 00:08:08.550 May have multiple controllers: No 00:08:08.550 Associated with SR-IOV VF: No 00:08:08.550 Max Data Transfer Size: 524288 00:08:08.550 Max Number of Namespaces: 256 00:08:08.550 Max Number of I/O Queues: 64 00:08:08.550 NVMe Specification Version (VS): 1.4 00:08:08.550 NVMe Specification Version (Identify): 1.4 00:08:08.550 Maximum Queue Entries: 2048 00:08:08.550 Contiguous Queues Required: Yes 00:08:08.550 Arbitration Mechanisms Supported 00:08:08.550 Weighted Round Robin: Not Supported 00:08:08.550 Vendor Specific: Not Supported 00:08:08.550 Reset Timeout: 7500 ms 00:08:08.550 Doorbell Stride: 4 bytes 00:08:08.550 NVM Subsystem Reset: Not Supported 00:08:08.550 Command Sets Supported 00:08:08.550 NVM Command Set: Supported 00:08:08.550 Boot Partition: Not Supported 00:08:08.550 Memory Page Size Minimum: 4096 bytes 00:08:08.550 Memory Page Size Maximum: 65536 bytes 00:08:08.550 Persistent Memory Region: Not Supported 00:08:08.550 Optional Asynchronous Events Supported 00:08:08.550 Namespace Attribute Notices: Supported 00:08:08.550 Firmware Activation Notices: Not Supported 00:08:08.550 ANA Change Notices: Not Supported 00:08:08.550 PLE Aggregate Log Change Notices: Not Supported 00:08:08.550 LBA Status Info Alert Notices: Not Supported 00:08:08.550 EGE Aggregate Log Change Notices: Not Supported 00:08:08.550 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.550 Zone Descriptor Change Notices: Not Supported 00:08:08.550 Discovery Log Change Notices: Not Supported 
00:08:08.550 Controller Attributes 00:08:08.550 128-bit Host Identifier: Not Supported 00:08:08.550 Non-Operational Permissive Mode: Not Supported 00:08:08.550 NVM Sets: Not Supported 00:08:08.550 Read Recovery Levels: Not Supported 00:08:08.550 Endurance Groups: Not Supported 00:08:08.550 Predictable Latency Mode: Not Supported 00:08:08.550 Traffic Based Keep Alive: Not Supported 00:08:08.550 Namespace Granularity: Not Supported 00:08:08.550 SQ Associations: Not Supported 00:08:08.550 UUID List: Not Supported 00:08:08.550 Multi-Domain Subsystem: Not Supported 00:08:08.550 Fixed Capacity Management: Not Supported 00:08:08.550 Variable Capacity Management: Not Supported 00:08:08.550 Delete Endurance Group: Not Supported 00:08:08.550 Delete NVM Set: Not Supported 00:08:08.550 Extended LBA Formats Supported: Supported 00:08:08.550 Flexible Data Placement Supported: Not Supported 00:08:08.550 00:08:08.550 Controller Memory Buffer Support 00:08:08.550 ================================ 00:08:08.550 Supported: No 00:08:08.550 00:08:08.550 Persistent Memory Region Support 00:08:08.550 ================================ 00:08:08.550 Supported: No 00:08:08.550 00:08:08.550 Admin Command Set Attributes 00:08:08.550 ============================ 00:08:08.550 Security Send/Receive: Not Supported 00:08:08.550 Format NVM: Supported 00:08:08.550 Firmware Activate/Download: Not Supported 00:08:08.550 Namespace Management: Supported 00:08:08.550 Device Self-Test: Not Supported 00:08:08.550 Directives: Supported 00:08:08.550 NVMe-MI: Not Supported 00:08:08.550 Virtualization Management: Not Supported 00:08:08.550 Doorbell Buffer Config: Supported 00:08:08.550 Get LBA Status Capability: Not Supported 00:08:08.550 Command & Feature Lockdown Capability: Not Supported 00:08:08.550 Abort Command Limit: 4 00:08:08.550 Async Event Request Limit: 4 00:08:08.550 Number of Firmware Slots: N/A 00:08:08.550 Firmware Slot 1 Read-Only: N/A 00:08:08.550 Firmware Activation Without Reset: N/A 
00:08:08.550 Multiple Update Detection Support: N/A 00:08:08.550 Firmware Update Granularity: No Information Provided 00:08:08.550 Per-Namespace SMART Log: Yes 00:08:08.550 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.550 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:08.550 Command Effects Log Page: Supported 00:08:08.551 Get Log Page Extended Data: Supported 00:08:08.551 Telemetry Log Pages: Not Supported 00:08:08.551 Persistent Event Log Pages: Not Supported 00:08:08.551 Supported Log Pages Log Page: May Support 00:08:08.551 Commands Supported & Effects Log Page: Not Supported 00:08:08.551 Feature Identifiers & Effects Log Page: May Support 00:08:08.551 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.551 Data Area 4 for Telemetry Log: Not Supported 00:08:08.551 Error Log Page Entries Supported: 1 00:08:08.551 Keep Alive: Not Supported 00:08:08.551 00:08:08.551 NVM Command Set Attributes 00:08:08.551 ========================== 00:08:08.551 Submission Queue Entry Size 00:08:08.551 Max: 64 00:08:08.551 Min: 64 00:08:08.551 Completion Queue Entry Size 00:08:08.551 Max: 16 00:08:08.551 Min: 16 00:08:08.551 Number of Namespaces: 256 00:08:08.551 Compare Command: Supported 00:08:08.551 Write Uncorrectable Command: Not Supported 00:08:08.551 Dataset Management Command: Supported 00:08:08.551 Write Zeroes Command: Supported 00:08:08.551 Set Features Save Field: Supported 00:08:08.551 Reservations: Not Supported 00:08:08.551 Timestamp: Supported 00:08:08.551 Copy: Supported 00:08:08.551 Volatile Write Cache: Present 00:08:08.551 Atomic Write Unit (Normal): 1 00:08:08.551 Atomic Write Unit (PFail): 1 00:08:08.551 Atomic Compare & Write Unit: 1 00:08:08.551 Fused Compare & Write: Not Supported 00:08:08.551 Scatter-Gather List 00:08:08.551 SGL Command Set: Supported 00:08:08.551 SGL Keyed: Not Supported 00:08:08.551 SGL Bit Bucket Descriptor: Not Supported 00:08:08.551 SGL Metadata Pointer: Not Supported 00:08:08.551 Oversized SGL: Not Supported 
00:08:08.551 SGL Metadata Address: Not Supported 00:08:08.551 SGL Offset: Not Supported 00:08:08.551 Transport SGL Data Block: Not Supported 00:08:08.551 Replay Protected Memory Block: Not Supported 00:08:08.551 00:08:08.551 Firmware Slot Information 00:08:08.551 ========================= 00:08:08.551 Active slot: 1 00:08:08.551 Slot 1 Firmware Revision: 1.0 00:08:08.551 00:08:08.551 00:08:08.551 Commands Supported and Effects 00:08:08.551 ============================== 00:08:08.551 Admin Commands 00:08:08.551 -------------- 00:08:08.551 Delete I/O Submission Queue (00h): Supported 00:08:08.551 Create I/O Submission Queue (01h): Supported 00:08:08.551 Get Log Page (02h): Supported 00:08:08.551 Delete I/O Completion Queue (04h): Supported 00:08:08.551 Create I/O Completion Queue (05h): Supported 00:08:08.551 Identify (06h): Supported 00:08:08.551 Abort (08h): Supported 00:08:08.551 Set Features (09h): Supported 00:08:08.551 Get Features (0Ah): Supported 00:08:08.551 Asynchronous Event Request (0Ch): Supported 00:08:08.551 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.551 Directive Send (19h): Supported 00:08:08.551 Directive Receive (1Ah): Supported 00:08:08.551 Virtualization Management (1Ch): Supported 00:08:08.551 Doorbell Buffer Config (7Ch): Supported 00:08:08.551 Format NVM (80h): Supported LBA-Change 00:08:08.551 I/O Commands 00:08:08.551 ------------ 00:08:08.551 Flush (00h): Supported LBA-Change 00:08:08.551 Write (01h): Supported LBA-Change 00:08:08.551 Read (02h): Supported 00:08:08.551 Compare (05h): Supported 00:08:08.551 Write Zeroes (08h): Supported LBA-Change 00:08:08.551 Dataset Management (09h): Supported LBA-Change 00:08:08.551 Unknown (0Ch): Supported 00:08:08.551 Unknown (12h): Supported 00:08:08.551 Copy (19h): Supported LBA-Change 00:08:08.551 Unknown (1Dh): Supported LBA-Change 00:08:08.551 00:08:08.551 Error Log 00:08:08.551 ========= 00:08:08.551 00:08:08.551 Arbitration 00:08:08.551 =========== 00:08:08.551 Arbitration 
Burst: no limit 00:08:08.551 00:08:08.551 Power Management 00:08:08.551 ================ 00:08:08.551 Number of Power States: 1 00:08:08.551 Current Power State: Power State #0 00:08:08.551 Power State #0: 00:08:08.551 Max Power: 25.00 W 00:08:08.551 Non-Operational State: Operational 00:08:08.551 Entry Latency: 16 microseconds 00:08:08.551 Exit Latency: 4 microseconds 00:08:08.551 Relative Read Throughput: 0 00:08:08.551 Relative Read Latency: 0 00:08:08.551 Relative Write Throughput: 0 00:08:08.551 Relative Write Latency: 0 00:08:08.551 Idle Power: Not Reported 00:08:08.551 Active Power: Not Reported 00:08:08.551 Non-Operational Permissive Mode: Not Supported 00:08:08.551 00:08:08.551 Health Information 00:08:08.551 ================== 00:08:08.551 Critical Warnings: 00:08:08.551 Available Spare Space: OK 00:08:08.551 Temperature: OK 00:08:08.551 Device Reliability: OK 00:08:08.551 Read Only: No 00:08:08.551 Volatile Memory Backup: OK 00:08:08.551 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.551 Available Spare: 0% 00:08:08.551 Available Spare Threshold: 0% 00:08:08.551 Life Percentage Used: 0% 00:08:08.551 Data Units Read: 662 00:08:08.551 Data Units Written: 590 00:08:08.551 Host Read Commands: 37588 00:08:08.551 Host Write Commands: 37374 00:08:08.551 Controller Busy Time: 0 minutes 00:08:08.551 Power Cycles: 0 00:08:08.551 Power On Hours: 0 hours 00:08:08.551 Unsafe Shutdowns: 0 00:08:08.551 Unrecoverable Media Errors: 0 00:08:08.551 Lifetime Error Log Entries: 0 00:08:08.551 Warning Temperature Time: 0 minutes 00:08:08.551 Critical Temperature Time: 0 minutes 00:08:08.551 00:08:08.551 Number of Queues 00:08:08.551 ================ 00:08:08.551 Number of I/O Submission Queues: 64 00:08:08.551 Number of I/O Completion Queues: 64 00:08:08.551 00:08:08.551 ZNS Specific Controller Data 00:08:08.551 ============================ 00:08:08.551 Zone Append Size Limit: 0 00:08:08.551 00:08:08.551 
00:08:08.551 Active Namespaces 00:08:08.551 ================= 00:08:08.551 Namespace ID:1 00:08:08.551 Error Recovery Timeout: Unlimited 00:08:08.551 Command Set Identifier: NVM (00h) 00:08:08.551 Deallocate: Supported 00:08:08.551 Deallocated/Unwritten Error: Supported 00:08:08.551 Deallocated Read Value: All 0x00 00:08:08.551 Deallocate in Write Zeroes: Not Supported 00:08:08.551 Deallocated Guard Field: 0xFFFF 00:08:08.551 Flush: Supported 00:08:08.551 Reservation: Not Supported 00:08:08.551 Metadata Transferred as: Separate Metadata Buffer 00:08:08.551 Namespace Sharing Capabilities: Private 00:08:08.551 Size (in LBAs): 1548666 (5GiB) 00:08:08.551 Capacity (in LBAs): 1548666 (5GiB) 00:08:08.551 Utilization (in LBAs): 1548666 (5GiB) 00:08:08.551 Thin Provisioning: Not Supported 00:08:08.551 Per-NS Atomic Units: No 00:08:08.551 Maximum Single Source Range Length: 128 00:08:08.551 Maximum Copy Length: 128 00:08:08.551 Maximum Source Range Count: 128 00:08:08.551 NGUID/EUI64 Never Reused: No 00:08:08.551 Namespace Write Protected: No 00:08:08.551 Number of LBA Formats: 8 00:08:08.551 Current LBA Format: LBA Format #07 00:08:08.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.551 00:08:08.551 NVM Specific Namespace Data 00:08:08.551 =========================== 00:08:08.551 Logical Block Storage Tag Mask: 0 00:08:08.551 Protection Information Capabilities: 00:08:08.551 16b Guard Protection Information Storage Tag Support: No 00:08:08.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.551 
Storage Tag Check Read Support: No 00:08:08.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.551 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:08.551 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:08.815 ===================================================== 00:08:08.816 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:08.816 ===================================================== 00:08:08.816 Controller Capabilities/Features 00:08:08.816 ================================ 00:08:08.816 Vendor ID: 1b36 00:08:08.816 Subsystem Vendor ID: 1af4 00:08:08.816 Serial Number: 12341 00:08:08.816 Model Number: QEMU NVMe Ctrl 00:08:08.816 Firmware Version: 8.0.0 00:08:08.816 Recommended Arb Burst: 6 00:08:08.816 IEEE OUI Identifier: 00 54 52 00:08:08.816 Multi-path I/O 00:08:08.816 May have multiple subsystem ports: No 00:08:08.816 May have multiple controllers: No 00:08:08.816 Associated with SR-IOV VF: No 00:08:08.816 Max Data Transfer Size: 524288 00:08:08.816 Max Number of Namespaces: 256 00:08:08.816 Max Number of I/O Queues: 64 00:08:08.816 NVMe Specification 
Version (VS): 1.4 00:08:08.816 NVMe Specification Version (Identify): 1.4 00:08:08.816 Maximum Queue Entries: 2048 00:08:08.816 Contiguous Queues Required: Yes 00:08:08.816 Arbitration Mechanisms Supported 00:08:08.816 Weighted Round Robin: Not Supported 00:08:08.816 Vendor Specific: Not Supported 00:08:08.816 Reset Timeout: 7500 ms 00:08:08.816 Doorbell Stride: 4 bytes 00:08:08.816 NVM Subsystem Reset: Not Supported 00:08:08.816 Command Sets Supported 00:08:08.816 NVM Command Set: Supported 00:08:08.816 Boot Partition: Not Supported 00:08:08.816 Memory Page Size Minimum: 4096 bytes 00:08:08.816 Memory Page Size Maximum: 65536 bytes 00:08:08.816 Persistent Memory Region: Not Supported 00:08:08.816 Optional Asynchronous Events Supported 00:08:08.816 Namespace Attribute Notices: Supported 00:08:08.816 Firmware Activation Notices: Not Supported 00:08:08.816 ANA Change Notices: Not Supported 00:08:08.816 PLE Aggregate Log Change Notices: Not Supported 00:08:08.816 LBA Status Info Alert Notices: Not Supported 00:08:08.816 EGE Aggregate Log Change Notices: Not Supported 00:08:08.816 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.816 Zone Descriptor Change Notices: Not Supported 00:08:08.816 Discovery Log Change Notices: Not Supported 00:08:08.816 Controller Attributes 00:08:08.816 128-bit Host Identifier: Not Supported 00:08:08.816 Non-Operational Permissive Mode: Not Supported 00:08:08.816 NVM Sets: Not Supported 00:08:08.816 Read Recovery Levels: Not Supported 00:08:08.816 Endurance Groups: Not Supported 00:08:08.816 Predictable Latency Mode: Not Supported 00:08:08.816 Traffic Based Keep Alive: Not Supported 00:08:08.816 Namespace Granularity: Not Supported 00:08:08.816 SQ Associations: Not Supported 00:08:08.816 UUID List: Not Supported 00:08:08.816 Multi-Domain Subsystem: Not Supported 00:08:08.816 Fixed Capacity Management: Not Supported 00:08:08.816 Variable Capacity Management: Not Supported 00:08:08.816 Delete Endurance Group: Not Supported 
00:08:08.816 Delete NVM Set: Not Supported 00:08:08.816 Extended LBA Formats Supported: Supported 00:08:08.816 Flexible Data Placement Supported: Not Supported 00:08:08.816 00:08:08.816 Controller Memory Buffer Support 00:08:08.816 ================================ 00:08:08.816 Supported: No 00:08:08.816 00:08:08.816 Persistent Memory Region Support 00:08:08.816 ================================ 00:08:08.816 Supported: No 00:08:08.816 00:08:08.816 Admin Command Set Attributes 00:08:08.816 ============================ 00:08:08.816 Security Send/Receive: Not Supported 00:08:08.816 Format NVM: Supported 00:08:08.816 Firmware Activate/Download: Not Supported 00:08:08.816 Namespace Management: Supported 00:08:08.816 Device Self-Test: Not Supported 00:08:08.816 Directives: Supported 00:08:08.816 NVMe-MI: Not Supported 00:08:08.816 Virtualization Management: Not Supported 00:08:08.816 Doorbell Buffer Config: Supported 00:08:08.816 Get LBA Status Capability: Not Supported 00:08:08.816 Command & Feature Lockdown Capability: Not Supported 00:08:08.816 Abort Command Limit: 4 00:08:08.816 Async Event Request Limit: 4 00:08:08.816 Number of Firmware Slots: N/A 00:08:08.816 Firmware Slot 1 Read-Only: N/A 00:08:08.816 Firmware Activation Without Reset: N/A 00:08:08.816 Multiple Update Detection Support: N/A 00:08:08.816 Firmware Update Granularity: No Information Provided 00:08:08.816 Per-Namespace SMART Log: Yes 00:08:08.816 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.816 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:08.816 Command Effects Log Page: Supported 00:08:08.816 Get Log Page Extended Data: Supported 00:08:08.816 Telemetry Log Pages: Not Supported 00:08:08.816 Persistent Event Log Pages: Not Supported 00:08:08.816 Supported Log Pages Log Page: May Support 00:08:08.816 Commands Supported & Effects Log Page: Not Supported 00:08:08.816 Feature Identifiers & Effects Log Page:May Support 00:08:08.816 NVMe-MI Commands & Effects Log Page: May Support 
00:08:08.816 Data Area 4 for Telemetry Log: Not Supported 00:08:08.816 Error Log Page Entries Supported: 1 00:08:08.816 Keep Alive: Not Supported 00:08:08.816 00:08:08.816 NVM Command Set Attributes 00:08:08.816 ========================== 00:08:08.816 Submission Queue Entry Size 00:08:08.816 Max: 64 00:08:08.816 Min: 64 00:08:08.816 Completion Queue Entry Size 00:08:08.816 Max: 16 00:08:08.816 Min: 16 00:08:08.816 Number of Namespaces: 256 00:08:08.816 Compare Command: Supported 00:08:08.816 Write Uncorrectable Command: Not Supported 00:08:08.816 Dataset Management Command: Supported 00:08:08.816 Write Zeroes Command: Supported 00:08:08.816 Set Features Save Field: Supported 00:08:08.816 Reservations: Not Supported 00:08:08.816 Timestamp: Supported 00:08:08.816 Copy: Supported 00:08:08.816 Volatile Write Cache: Present 00:08:08.816 Atomic Write Unit (Normal): 1 00:08:08.816 Atomic Write Unit (PFail): 1 00:08:08.816 Atomic Compare & Write Unit: 1 00:08:08.816 Fused Compare & Write: Not Supported 00:08:08.816 Scatter-Gather List 00:08:08.816 SGL Command Set: Supported 00:08:08.816 SGL Keyed: Not Supported 00:08:08.816 SGL Bit Bucket Descriptor: Not Supported 00:08:08.816 SGL Metadata Pointer: Not Supported 00:08:08.816 Oversized SGL: Not Supported 00:08:08.816 SGL Metadata Address: Not Supported 00:08:08.816 SGL Offset: Not Supported 00:08:08.816 Transport SGL Data Block: Not Supported 00:08:08.816 Replay Protected Memory Block: Not Supported 00:08:08.816 00:08:08.816 Firmware Slot Information 00:08:08.816 ========================= 00:08:08.816 Active slot: 1 00:08:08.816 Slot 1 Firmware Revision: 1.0 00:08:08.816 00:08:08.816 00:08:08.816 Commands Supported and Effects 00:08:08.816 ============================== 00:08:08.816 Admin Commands 00:08:08.816 -------------- 00:08:08.816 Delete I/O Submission Queue (00h): Supported 00:08:08.816 Create I/O Submission Queue (01h): Supported 00:08:08.816 Get Log Page (02h): Supported 00:08:08.816 Delete I/O Completion Queue 
(04h): Supported 00:08:08.816 Create I/O Completion Queue (05h): Supported 00:08:08.816 Identify (06h): Supported 00:08:08.816 Abort (08h): Supported 00:08:08.816 Set Features (09h): Supported 00:08:08.816 Get Features (0Ah): Supported 00:08:08.816 Asynchronous Event Request (0Ch): Supported 00:08:08.816 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.816 Directive Send (19h): Supported 00:08:08.816 Directive Receive (1Ah): Supported 00:08:08.816 Virtualization Management (1Ch): Supported 00:08:08.816 Doorbell Buffer Config (7Ch): Supported 00:08:08.816 Format NVM (80h): Supported LBA-Change 00:08:08.816 I/O Commands 00:08:08.816 ------------ 00:08:08.816 Flush (00h): Supported LBA-Change 00:08:08.816 Write (01h): Supported LBA-Change 00:08:08.816 Read (02h): Supported 00:08:08.816 Compare (05h): Supported 00:08:08.816 Write Zeroes (08h): Supported LBA-Change 00:08:08.816 Dataset Management (09h): Supported LBA-Change 00:08:08.816 Unknown (0Ch): Supported 00:08:08.816 Unknown (12h): Supported 00:08:08.816 Copy (19h): Supported LBA-Change 00:08:08.816 Unknown (1Dh): Supported LBA-Change 00:08:08.816 00:08:08.816 Error Log 00:08:08.816 ========= 00:08:08.816 00:08:08.816 Arbitration 00:08:08.816 =========== 00:08:08.816 Arbitration Burst: no limit 00:08:08.816 00:08:08.816 Power Management 00:08:08.816 ================ 00:08:08.816 Number of Power States: 1 00:08:08.816 Current Power State: Power State #0 00:08:08.816 Power State #0: 00:08:08.817 Max Power: 25.00 W 00:08:08.817 Non-Operational State: Operational 00:08:08.817 Entry Latency: 16 microseconds 00:08:08.817 Exit Latency: 4 microseconds 00:08:08.817 Relative Read Throughput: 0 00:08:08.817 Relative Read Latency: 0 00:08:08.817 Relative Write Throughput: 0 00:08:08.817 Relative Write Latency: 0 00:08:08.817 Idle Power: Not Reported 00:08:08.817 Active Power: Not Reported 00:08:08.817 Non-Operational Permissive Mode: Not Supported 00:08:08.817 00:08:08.817 Health Information 00:08:08.817 
================== 00:08:08.817 Critical Warnings: 00:08:08.817 Available Spare Space: OK 00:08:08.817 Temperature: OK 00:08:08.817 Device Reliability: OK 00:08:08.817 Read Only: No 00:08:08.817 Volatile Memory Backup: OK 00:08:08.817 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.817 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.817 Available Spare: 0% 00:08:08.817 Available Spare Threshold: 0% 00:08:08.817 Life Percentage Used: 0% 00:08:08.817 Data Units Read: 1060 00:08:08.817 Data Units Written: 925 00:08:08.817 Host Read Commands: 56957 00:08:08.817 Host Write Commands: 55715 00:08:08.817 Controller Busy Time: 0 minutes 00:08:08.817 Power Cycles: 0 00:08:08.817 Power On Hours: 0 hours 00:08:08.817 Unsafe Shutdowns: 0 00:08:08.817 Unrecoverable Media Errors: 0 00:08:08.817 Lifetime Error Log Entries: 0 00:08:08.817 Warning Temperature Time: 0 minutes 00:08:08.817 Critical Temperature Time: 0 minutes 00:08:08.817 00:08:08.817 Number of Queues 00:08:08.817 ================ 00:08:08.817 Number of I/O Submission Queues: 64 00:08:08.817 Number of I/O Completion Queues: 64 00:08:08.817 00:08:08.817 ZNS Specific Controller Data 00:08:08.817 ============================ 00:08:08.817 Zone Append Size Limit: 0 00:08:08.817 00:08:08.817 00:08:08.817 Active Namespaces 00:08:08.817 ================= 00:08:08.817 Namespace ID:1 00:08:08.817 Error Recovery Timeout: Unlimited 00:08:08.817 Command Set Identifier: NVM (00h) 00:08:08.817 Deallocate: Supported 00:08:08.817 Deallocated/Unwritten Error: Supported 00:08:08.817 Deallocated Read Value: All 0x00 00:08:08.817 Deallocate in Write Zeroes: Not Supported 00:08:08.817 Deallocated Guard Field: 0xFFFF 00:08:08.817 Flush: Supported 00:08:08.817 Reservation: Not Supported 00:08:08.817 Namespace Sharing Capabilities: Private 00:08:08.817 Size (in LBAs): 1310720 (5GiB) 00:08:08.817 Capacity (in LBAs): 1310720 (5GiB) 00:08:08.817 Utilization (in LBAs): 1310720 (5GiB) 00:08:08.817 Thin Provisioning: Not Supported 
00:08:08.817 Per-NS Atomic Units: No 00:08:08.817 Maximum Single Source Range Length: 128 00:08:08.817 Maximum Copy Length: 128 00:08:08.817 Maximum Source Range Count: 128 00:08:08.817 NGUID/EUI64 Never Reused: No 00:08:08.817 Namespace Write Protected: No 00:08:08.817 Number of LBA Formats: 8 00:08:08.817 Current LBA Format: LBA Format #04 00:08:08.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.817 00:08:08.817 NVM Specific Namespace Data 00:08:08.817 =========================== 00:08:08.817 Logical Block Storage Tag Mask: 0 00:08:08.817 Protection Information Capabilities: 00:08:08.817 16b Guard Protection Information Storage Tag Support: No 00:08:08.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.817 Storage Tag Check Read Support: No 00:08:08.817 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.817 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:08.817 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:08.817 ===================================================== 00:08:08.817 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:08.817 ===================================================== 00:08:08.817 Controller Capabilities/Features 00:08:08.817 ================================ 00:08:08.817 Vendor ID: 1b36 00:08:08.817 Subsystem Vendor ID: 1af4 00:08:08.817 Serial Number: 12342 00:08:08.817 Model Number: QEMU NVMe Ctrl 00:08:08.817 Firmware Version: 8.0.0 00:08:08.817 Recommended Arb Burst: 6 00:08:08.817 IEEE OUI Identifier: 00 54 52 00:08:08.817 Multi-path I/O 00:08:08.817 May have multiple subsystem ports: No 00:08:08.817 May have multiple controllers: No 00:08:08.817 Associated with SR-IOV VF: No 00:08:08.817 Max Data Transfer Size: 524288 00:08:08.817 Max Number of Namespaces: 256 00:08:08.817 Max Number of I/O Queues: 64 00:08:08.817 NVMe Specification Version (VS): 1.4 00:08:08.817 NVMe Specification Version (Identify): 1.4 00:08:08.817 Maximum Queue Entries: 2048 00:08:08.817 Contiguous Queues Required: Yes 00:08:08.817 Arbitration Mechanisms Supported 00:08:08.817 Weighted Round Robin: Not Supported 00:08:08.817 Vendor Specific: Not Supported 00:08:08.817 Reset Timeout: 7500 ms 00:08:08.817 Doorbell Stride: 4 bytes 00:08:08.817 NVM Subsystem Reset: Not Supported 00:08:08.817 Command Sets Supported 00:08:08.817 NVM Command Set: Supported 00:08:08.817 Boot Partition: Not Supported 00:08:08.817 Memory Page Size Minimum: 4096 bytes 00:08:08.817 Memory Page Size Maximum: 65536 bytes 00:08:08.817 Persistent Memory Region: Not Supported 00:08:08.817 Optional Asynchronous Events Supported 00:08:08.817 Namespace Attribute Notices: 
Supported 00:08:08.817 Firmware Activation Notices: Not Supported 00:08:08.817 ANA Change Notices: Not Supported 00:08:08.817 PLE Aggregate Log Change Notices: Not Supported 00:08:08.817 LBA Status Info Alert Notices: Not Supported 00:08:08.817 EGE Aggregate Log Change Notices: Not Supported 00:08:08.817 Normal NVM Subsystem Shutdown event: Not Supported 00:08:08.817 Zone Descriptor Change Notices: Not Supported 00:08:08.817 Discovery Log Change Notices: Not Supported 00:08:08.817 Controller Attributes 00:08:08.817 128-bit Host Identifier: Not Supported 00:08:08.817 Non-Operational Permissive Mode: Not Supported 00:08:08.817 NVM Sets: Not Supported 00:08:08.817 Read Recovery Levels: Not Supported 00:08:08.817 Endurance Groups: Not Supported 00:08:08.817 Predictable Latency Mode: Not Supported 00:08:08.817 Traffic Based Keep ALive: Not Supported 00:08:08.817 Namespace Granularity: Not Supported 00:08:08.817 SQ Associations: Not Supported 00:08:08.817 UUID List: Not Supported 00:08:08.817 Multi-Domain Subsystem: Not Supported 00:08:08.817 Fixed Capacity Management: Not Supported 00:08:08.817 Variable Capacity Management: Not Supported 00:08:08.817 Delete Endurance Group: Not Supported 00:08:08.817 Delete NVM Set: Not Supported 00:08:08.817 Extended LBA Formats Supported: Supported 00:08:08.817 Flexible Data Placement Supported: Not Supported 00:08:08.817 00:08:08.817 Controller Memory Buffer Support 00:08:08.817 ================================ 00:08:08.817 Supported: No 00:08:08.817 00:08:08.817 Persistent Memory Region Support 00:08:08.817 ================================ 00:08:08.817 Supported: No 00:08:08.817 00:08:08.817 Admin Command Set Attributes 00:08:08.817 ============================ 00:08:08.817 Security Send/Receive: Not Supported 00:08:08.817 Format NVM: Supported 00:08:08.817 Firmware Activate/Download: Not Supported 00:08:08.817 Namespace Management: Supported 00:08:08.817 Device Self-Test: Not Supported 00:08:08.817 Directives: Supported 
00:08:08.817 NVMe-MI: Not Supported 00:08:08.817 Virtualization Management: Not Supported 00:08:08.817 Doorbell Buffer Config: Supported 00:08:08.817 Get LBA Status Capability: Not Supported 00:08:08.817 Command & Feature Lockdown Capability: Not Supported 00:08:08.817 Abort Command Limit: 4 00:08:08.817 Async Event Request Limit: 4 00:08:08.817 Number of Firmware Slots: N/A 00:08:08.817 Firmware Slot 1 Read-Only: N/A 00:08:08.817 Firmware Activation Without Reset: N/A 00:08:08.817 Multiple Update Detection Support: N/A 00:08:08.817 Firmware Update Granularity: No Information Provided 00:08:08.817 Per-Namespace SMART Log: Yes 00:08:08.818 Asymmetric Namespace Access Log Page: Not Supported 00:08:08.818 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:08.818 Command Effects Log Page: Supported 00:08:08.818 Get Log Page Extended Data: Supported 00:08:08.818 Telemetry Log Pages: Not Supported 00:08:08.818 Persistent Event Log Pages: Not Supported 00:08:08.818 Supported Log Pages Log Page: May Support 00:08:08.818 Commands Supported & Effects Log Page: Not Supported 00:08:08.818 Feature Identifiers & Effects Log Page:May Support 00:08:08.818 NVMe-MI Commands & Effects Log Page: May Support 00:08:08.818 Data Area 4 for Telemetry Log: Not Supported 00:08:08.818 Error Log Page Entries Supported: 1 00:08:08.818 Keep Alive: Not Supported 00:08:08.818 00:08:08.818 NVM Command Set Attributes 00:08:08.818 ========================== 00:08:08.818 Submission Queue Entry Size 00:08:08.818 Max: 64 00:08:08.818 Min: 64 00:08:08.818 Completion Queue Entry Size 00:08:08.818 Max: 16 00:08:08.818 Min: 16 00:08:08.818 Number of Namespaces: 256 00:08:08.818 Compare Command: Supported 00:08:08.818 Write Uncorrectable Command: Not Supported 00:08:08.818 Dataset Management Command: Supported 00:08:08.818 Write Zeroes Command: Supported 00:08:08.818 Set Features Save Field: Supported 00:08:08.818 Reservations: Not Supported 00:08:08.818 Timestamp: Supported 00:08:08.818 Copy: Supported 
00:08:08.818 Volatile Write Cache: Present 00:08:08.818 Atomic Write Unit (Normal): 1 00:08:08.818 Atomic Write Unit (PFail): 1 00:08:08.818 Atomic Compare & Write Unit: 1 00:08:08.818 Fused Compare & Write: Not Supported 00:08:08.818 Scatter-Gather List 00:08:08.818 SGL Command Set: Supported 00:08:08.818 SGL Keyed: Not Supported 00:08:08.818 SGL Bit Bucket Descriptor: Not Supported 00:08:08.818 SGL Metadata Pointer: Not Supported 00:08:08.818 Oversized SGL: Not Supported 00:08:08.818 SGL Metadata Address: Not Supported 00:08:08.818 SGL Offset: Not Supported 00:08:08.818 Transport SGL Data Block: Not Supported 00:08:08.818 Replay Protected Memory Block: Not Supported 00:08:08.818 00:08:08.818 Firmware Slot Information 00:08:08.818 ========================= 00:08:08.818 Active slot: 1 00:08:08.818 Slot 1 Firmware Revision: 1.0 00:08:08.818 00:08:08.818 00:08:08.818 Commands Supported and Effects 00:08:08.818 ============================== 00:08:08.818 Admin Commands 00:08:08.818 -------------- 00:08:08.818 Delete I/O Submission Queue (00h): Supported 00:08:08.818 Create I/O Submission Queue (01h): Supported 00:08:08.818 Get Log Page (02h): Supported 00:08:08.818 Delete I/O Completion Queue (04h): Supported 00:08:08.818 Create I/O Completion Queue (05h): Supported 00:08:08.818 Identify (06h): Supported 00:08:08.818 Abort (08h): Supported 00:08:08.818 Set Features (09h): Supported 00:08:08.818 Get Features (0Ah): Supported 00:08:08.818 Asynchronous Event Request (0Ch): Supported 00:08:08.818 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:08.818 Directive Send (19h): Supported 00:08:08.818 Directive Receive (1Ah): Supported 00:08:08.818 Virtualization Management (1Ch): Supported 00:08:08.818 Doorbell Buffer Config (7Ch): Supported 00:08:08.818 Format NVM (80h): Supported LBA-Change 00:08:08.818 I/O Commands 00:08:08.818 ------------ 00:08:08.818 Flush (00h): Supported LBA-Change 00:08:08.818 Write (01h): Supported LBA-Change 00:08:08.818 Read (02h): 
Supported 00:08:08.818 Compare (05h): Supported 00:08:08.818 Write Zeroes (08h): Supported LBA-Change 00:08:08.818 Dataset Management (09h): Supported LBA-Change 00:08:08.818 Unknown (0Ch): Supported 00:08:08.818 Unknown (12h): Supported 00:08:08.818 Copy (19h): Supported LBA-Change 00:08:08.818 Unknown (1Dh): Supported LBA-Change 00:08:08.818 00:08:08.818 Error Log 00:08:08.818 ========= 00:08:08.818 00:08:08.818 Arbitration 00:08:08.818 =========== 00:08:08.818 Arbitration Burst: no limit 00:08:08.818 00:08:08.818 Power Management 00:08:08.818 ================ 00:08:08.818 Number of Power States: 1 00:08:08.818 Current Power State: Power State #0 00:08:08.818 Power State #0: 00:08:08.818 Max Power: 25.00 W 00:08:08.818 Non-Operational State: Operational 00:08:08.818 Entry Latency: 16 microseconds 00:08:08.818 Exit Latency: 4 microseconds 00:08:08.818 Relative Read Throughput: 0 00:08:08.818 Relative Read Latency: 0 00:08:08.818 Relative Write Throughput: 0 00:08:08.818 Relative Write Latency: 0 00:08:08.818 Idle Power: Not Reported 00:08:08.818 Active Power: Not Reported 00:08:08.818 Non-Operational Permissive Mode: Not Supported 00:08:08.818 00:08:08.818 Health Information 00:08:08.818 ================== 00:08:08.818 Critical Warnings: 00:08:08.818 Available Spare Space: OK 00:08:08.818 Temperature: OK 00:08:08.818 Device Reliability: OK 00:08:08.818 Read Only: No 00:08:08.818 Volatile Memory Backup: OK 00:08:08.818 Current Temperature: 323 Kelvin (50 Celsius) 00:08:08.818 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:08.818 Available Spare: 0% 00:08:08.818 Available Spare Threshold: 0% 00:08:08.818 Life Percentage Used: 0% 00:08:08.818 Data Units Read: 2215 00:08:08.818 Data Units Written: 2002 00:08:08.818 Host Read Commands: 115227 00:08:08.818 Host Write Commands: 113496 00:08:08.818 Controller Busy Time: 0 minutes 00:08:08.818 Power Cycles: 0 00:08:08.818 Power On Hours: 0 hours 00:08:08.818 Unsafe Shutdowns: 0 00:08:08.818 Unrecoverable Media 
Errors: 0 00:08:08.818 Lifetime Error Log Entries: 0 00:08:08.818 Warning Temperature Time: 0 minutes 00:08:08.818 Critical Temperature Time: 0 minutes 00:08:08.818 00:08:08.818 Number of Queues 00:08:08.818 ================ 00:08:08.818 Number of I/O Submission Queues: 64 00:08:08.818 Number of I/O Completion Queues: 64 00:08:08.818 00:08:08.818 ZNS Specific Controller Data 00:08:08.818 ============================ 00:08:08.818 Zone Append Size Limit: 0 00:08:08.818 00:08:08.818 00:08:08.818 Active Namespaces 00:08:08.818 ================= 00:08:08.818 Namespace ID:1 00:08:08.818 Error Recovery Timeout: Unlimited 00:08:08.818 Command Set Identifier: NVM (00h) 00:08:08.818 Deallocate: Supported 00:08:08.818 Deallocated/Unwritten Error: Supported 00:08:08.818 Deallocated Read Value: All 0x00 00:08:08.818 Deallocate in Write Zeroes: Not Supported 00:08:08.818 Deallocated Guard Field: 0xFFFF 00:08:08.818 Flush: Supported 00:08:08.818 Reservation: Not Supported 00:08:08.818 Namespace Sharing Capabilities: Private 00:08:08.818 Size (in LBAs): 1048576 (4GiB) 00:08:08.818 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.818 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.818 Thin Provisioning: Not Supported 00:08:08.818 Per-NS Atomic Units: No 00:08:08.818 Maximum Single Source Range Length: 128 00:08:08.818 Maximum Copy Length: 128 00:08:08.818 Maximum Source Range Count: 128 00:08:08.818 NGUID/EUI64 Never Reused: No 00:08:08.818 Namespace Write Protected: No 00:08:08.818 Number of LBA Formats: 8 00:08:08.818 Current LBA Format: LBA Format #04 00:08:08.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.818 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.818 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.818 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.818 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.818 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.818 LBA Format #06: Data Size: 4096 Metadata Size: 16 
00:08:08.818 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.818 00:08:08.818 NVM Specific Namespace Data 00:08:08.818 =========================== 00:08:08.818 Logical Block Storage Tag Mask: 0 00:08:08.818 Protection Information Capabilities: 00:08:08.818 16b Guard Protection Information Storage Tag Support: No 00:08:08.818 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.818 Storage Tag Check Read Support: No 00:08:08.818 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.818 Namespace ID:2 00:08:08.818 Error Recovery Timeout: Unlimited 00:08:08.818 Command Set Identifier: NVM (00h) 00:08:08.818 Deallocate: Supported 00:08:08.818 Deallocated/Unwritten Error: Supported 00:08:08.818 Deallocated Read Value: All 0x00 00:08:08.818 Deallocate in Write Zeroes: Not Supported 00:08:08.818 Deallocated Guard Field: 0xFFFF 00:08:08.819 Flush: Supported 00:08:08.819 Reservation: Not Supported 00:08:08.819 Namespace Sharing Capabilities: Private 00:08:08.819 Size (in LBAs): 1048576 (4GiB) 00:08:08.819 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.819 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.819 Thin Provisioning: Not Supported 00:08:08.819 Per-NS Atomic Units: No 
00:08:08.819 Maximum Single Source Range Length: 128 00:08:08.819 Maximum Copy Length: 128 00:08:08.819 Maximum Source Range Count: 128 00:08:08.819 NGUID/EUI64 Never Reused: No 00:08:08.819 Namespace Write Protected: No 00:08:08.819 Number of LBA Formats: 8 00:08:08.819 Current LBA Format: LBA Format #04 00:08:08.819 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.819 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.819 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.819 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.819 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.819 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.819 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.819 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.819 00:08:08.819 NVM Specific Namespace Data 00:08:08.819 =========================== 00:08:08.819 Logical Block Storage Tag Mask: 0 00:08:08.819 Protection Information Capabilities: 00:08:08.819 16b Guard Protection Information Storage Tag Support: No 00:08:08.819 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.819 Storage Tag Check Read Support: No 00:08:08.819 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:08.819 Namespace ID:3 00:08:08.819 Error Recovery Timeout: Unlimited 00:08:08.819 Command Set Identifier: NVM (00h) 00:08:08.819 Deallocate: Supported 00:08:08.819 Deallocated/Unwritten Error: Supported 00:08:08.819 Deallocated Read Value: All 0x00 00:08:08.819 Deallocate in Write Zeroes: Not Supported 00:08:08.819 Deallocated Guard Field: 0xFFFF 00:08:08.819 Flush: Supported 00:08:08.819 Reservation: Not Supported 00:08:08.819 Namespace Sharing Capabilities: Private 00:08:08.819 Size (in LBAs): 1048576 (4GiB) 00:08:08.819 Capacity (in LBAs): 1048576 (4GiB) 00:08:08.819 Utilization (in LBAs): 1048576 (4GiB) 00:08:08.819 Thin Provisioning: Not Supported 00:08:08.819 Per-NS Atomic Units: No 00:08:08.819 Maximum Single Source Range Length: 128 00:08:08.819 Maximum Copy Length: 128 00:08:08.819 Maximum Source Range Count: 128 00:08:08.819 NGUID/EUI64 Never Reused: No 00:08:08.819 Namespace Write Protected: No 00:08:08.819 Number of LBA Formats: 8 00:08:08.819 Current LBA Format: LBA Format #04 00:08:08.819 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:08.819 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:08.819 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:08.819 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:08.819 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:08.819 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:08.819 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:08.819 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:08.819 00:08:08.819 NVM Specific Namespace Data 00:08:08.819 =========================== 00:08:08.819 Logical Block Storage Tag Mask: 0 00:08:08.819 Protection Information Capabilities: 00:08:08.819 16b Guard Protection Information Storage Tag Support: No 00:08:08.819 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:08.819 Storage Tag Check Read Support: No 00:08:08.819 Extended LBA Format #00: Storage 
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:08.819 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:08.819 08:52:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:09.079 ===================================================== 00:08:09.079 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:09.079 ===================================================== 00:08:09.079 Controller Capabilities/Features 00:08:09.079 ================================ 00:08:09.079 Vendor ID: 1b36 00:08:09.079 Subsystem Vendor ID: 1af4 00:08:09.079 Serial Number: 12343 00:08:09.079 Model Number: QEMU NVMe Ctrl 00:08:09.079 Firmware Version: 8.0.0 00:08:09.079 Recommended Arb Burst: 6 00:08:09.079 IEEE OUI Identifier: 00 54 52 00:08:09.079 Multi-path I/O 00:08:09.079 May have multiple subsystem ports: No 00:08:09.079 May have multiple controllers: Yes 00:08:09.079 Associated with SR-IOV VF: No 00:08:09.079 Max Data Transfer Size: 524288 00:08:09.079 Max Number of Namespaces: 256 00:08:09.079 Max Number of I/O Queues: 64 00:08:09.079 NVMe Specification Version (VS): 1.4 00:08:09.079 NVMe Specification Version (Identify): 1.4 
00:08:09.079 Maximum Queue Entries: 2048 00:08:09.079 Contiguous Queues Required: Yes 00:08:09.079 Arbitration Mechanisms Supported 00:08:09.079 Weighted Round Robin: Not Supported 00:08:09.079 Vendor Specific: Not Supported 00:08:09.079 Reset Timeout: 7500 ms 00:08:09.079 Doorbell Stride: 4 bytes 00:08:09.079 NVM Subsystem Reset: Not Supported 00:08:09.079 Command Sets Supported 00:08:09.079 NVM Command Set: Supported 00:08:09.079 Boot Partition: Not Supported 00:08:09.079 Memory Page Size Minimum: 4096 bytes 00:08:09.079 Memory Page Size Maximum: 65536 bytes 00:08:09.079 Persistent Memory Region: Not Supported 00:08:09.079 Optional Asynchronous Events Supported 00:08:09.079 Namespace Attribute Notices: Supported 00:08:09.079 Firmware Activation Notices: Not Supported 00:08:09.079 ANA Change Notices: Not Supported 00:08:09.079 PLE Aggregate Log Change Notices: Not Supported 00:08:09.079 LBA Status Info Alert Notices: Not Supported 00:08:09.079 EGE Aggregate Log Change Notices: Not Supported 00:08:09.079 Normal NVM Subsystem Shutdown event: Not Supported 00:08:09.079 Zone Descriptor Change Notices: Not Supported 00:08:09.079 Discovery Log Change Notices: Not Supported 00:08:09.079 Controller Attributes 00:08:09.079 128-bit Host Identifier: Not Supported 00:08:09.079 Non-Operational Permissive Mode: Not Supported 00:08:09.079 NVM Sets: Not Supported 00:08:09.079 Read Recovery Levels: Not Supported 00:08:09.079 Endurance Groups: Supported 00:08:09.079 Predictable Latency Mode: Not Supported 00:08:09.079 Traffic Based Keep ALive: Not Supported 00:08:09.079 Namespace Granularity: Not Supported 00:08:09.079 SQ Associations: Not Supported 00:08:09.079 UUID List: Not Supported 00:08:09.079 Multi-Domain Subsystem: Not Supported 00:08:09.079 Fixed Capacity Management: Not Supported 00:08:09.079 Variable Capacity Management: Not Supported 00:08:09.079 Delete Endurance Group: Not Supported 00:08:09.079 Delete NVM Set: Not Supported 00:08:09.079 Extended LBA Formats Supported: 
Supported 00:08:09.079 Flexible Data Placement Supported: Supported 00:08:09.079 00:08:09.079 Controller Memory Buffer Support 00:08:09.079 ================================ 00:08:09.079 Supported: No 00:08:09.079 00:08:09.079 Persistent Memory Region Support 00:08:09.079 ================================ 00:08:09.079 Supported: No 00:08:09.079 00:08:09.079 Admin Command Set Attributes 00:08:09.079 ============================ 00:08:09.079 Security Send/Receive: Not Supported 00:08:09.079 Format NVM: Supported 00:08:09.079 Firmware Activate/Download: Not Supported 00:08:09.079 Namespace Management: Supported 00:08:09.079 Device Self-Test: Not Supported 00:08:09.079 Directives: Supported 00:08:09.079 NVMe-MI: Not Supported 00:08:09.079 Virtualization Management: Not Supported 00:08:09.079 Doorbell Buffer Config: Supported 00:08:09.079 Get LBA Status Capability: Not Supported 00:08:09.079 Command & Feature Lockdown Capability: Not Supported 00:08:09.079 Abort Command Limit: 4 00:08:09.079 Async Event Request Limit: 4 00:08:09.079 Number of Firmware Slots: N/A 00:08:09.079 Firmware Slot 1 Read-Only: N/A 00:08:09.079 Firmware Activation Without Reset: N/A 00:08:09.079 Multiple Update Detection Support: N/A 00:08:09.079 Firmware Update Granularity: No Information Provided 00:08:09.079 Per-Namespace SMART Log: Yes 00:08:09.079 Asymmetric Namespace Access Log Page: Not Supported 00:08:09.079 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:09.079 Command Effects Log Page: Supported 00:08:09.079 Get Log Page Extended Data: Supported 00:08:09.079 Telemetry Log Pages: Not Supported 00:08:09.079 Persistent Event Log Pages: Not Supported 00:08:09.079 Supported Log Pages Log Page: May Support 00:08:09.079 Commands Supported & Effects Log Page: Not Supported 00:08:09.079 Feature Identifiers & Effects Log Page:May Support 00:08:09.079 NVMe-MI Commands & Effects Log Page: May Support 00:08:09.079 Data Area 4 for Telemetry Log: Not Supported 00:08:09.079 Error Log Page Entries 
Supported: 1 00:08:09.079 Keep Alive: Not Supported 00:08:09.080 00:08:09.080 NVM Command Set Attributes 00:08:09.080 ========================== 00:08:09.080 Submission Queue Entry Size 00:08:09.080 Max: 64 00:08:09.080 Min: 64 00:08:09.080 Completion Queue Entry Size 00:08:09.080 Max: 16 00:08:09.080 Min: 16 00:08:09.080 Number of Namespaces: 256 00:08:09.080 Compare Command: Supported 00:08:09.080 Write Uncorrectable Command: Not Supported 00:08:09.080 Dataset Management Command: Supported 00:08:09.080 Write Zeroes Command: Supported 00:08:09.080 Set Features Save Field: Supported 00:08:09.080 Reservations: Not Supported 00:08:09.080 Timestamp: Supported 00:08:09.080 Copy: Supported 00:08:09.080 Volatile Write Cache: Present 00:08:09.080 Atomic Write Unit (Normal): 1 00:08:09.080 Atomic Write Unit (PFail): 1 00:08:09.080 Atomic Compare & Write Unit: 1 00:08:09.080 Fused Compare & Write: Not Supported 00:08:09.080 Scatter-Gather List 00:08:09.080 SGL Command Set: Supported 00:08:09.080 SGL Keyed: Not Supported 00:08:09.080 SGL Bit Bucket Descriptor: Not Supported 00:08:09.080 SGL Metadata Pointer: Not Supported 00:08:09.080 Oversized SGL: Not Supported 00:08:09.080 SGL Metadata Address: Not Supported 00:08:09.080 SGL Offset: Not Supported 00:08:09.080 Transport SGL Data Block: Not Supported 00:08:09.080 Replay Protected Memory Block: Not Supported 00:08:09.080 00:08:09.080 Firmware Slot Information 00:08:09.080 ========================= 00:08:09.080 Active slot: 1 00:08:09.080 Slot 1 Firmware Revision: 1.0 00:08:09.080 00:08:09.080 00:08:09.080 Commands Supported and Effects 00:08:09.080 ============================== 00:08:09.080 Admin Commands 00:08:09.080 -------------- 00:08:09.080 Delete I/O Submission Queue (00h): Supported 00:08:09.080 Create I/O Submission Queue (01h): Supported 00:08:09.080 Get Log Page (02h): Supported 00:08:09.080 Delete I/O Completion Queue (04h): Supported 00:08:09.080 Create I/O Completion Queue (05h): Supported 00:08:09.080 Identify 
(06h): Supported 00:08:09.080 Abort (08h): Supported 00:08:09.080 Set Features (09h): Supported 00:08:09.080 Get Features (0Ah): Supported 00:08:09.080 Asynchronous Event Request (0Ch): Supported 00:08:09.080 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:09.080 Directive Send (19h): Supported 00:08:09.080 Directive Receive (1Ah): Supported 00:08:09.080 Virtualization Management (1Ch): Supported 00:08:09.080 Doorbell Buffer Config (7Ch): Supported 00:08:09.080 Format NVM (80h): Supported LBA-Change 00:08:09.080 I/O Commands 00:08:09.080 ------------ 00:08:09.080 Flush (00h): Supported LBA-Change 00:08:09.080 Write (01h): Supported LBA-Change 00:08:09.080 Read (02h): Supported 00:08:09.080 Compare (05h): Supported 00:08:09.080 Write Zeroes (08h): Supported LBA-Change 00:08:09.080 Dataset Management (09h): Supported LBA-Change 00:08:09.080 Unknown (0Ch): Supported 00:08:09.080 Unknown (12h): Supported 00:08:09.080 Copy (19h): Supported LBA-Change 00:08:09.080 Unknown (1Dh): Supported LBA-Change 00:08:09.080 00:08:09.080 Error Log 00:08:09.080 ========= 00:08:09.080 00:08:09.080 Arbitration 00:08:09.080 =========== 00:08:09.080 Arbitration Burst: no limit 00:08:09.080 00:08:09.080 Power Management 00:08:09.080 ================ 00:08:09.080 Number of Power States: 1 00:08:09.080 Current Power State: Power State #0 00:08:09.080 Power State #0: 00:08:09.080 Max Power: 25.00 W 00:08:09.080 Non-Operational State: Operational 00:08:09.080 Entry Latency: 16 microseconds 00:08:09.080 Exit Latency: 4 microseconds 00:08:09.080 Relative Read Throughput: 0 00:08:09.080 Relative Read Latency: 0 00:08:09.080 Relative Write Throughput: 0 00:08:09.080 Relative Write Latency: 0 00:08:09.080 Idle Power: Not Reported 00:08:09.080 Active Power: Not Reported 00:08:09.080 Non-Operational Permissive Mode: Not Supported 00:08:09.080 00:08:09.080 Health Information 00:08:09.080 ================== 00:08:09.080 Critical Warnings: 00:08:09.080 Available Spare Space: OK 
00:08:09.080 Temperature: OK 00:08:09.080 Device Reliability: OK 00:08:09.080 Read Only: No 00:08:09.080 Volatile Memory Backup: OK 00:08:09.080 Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.080 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:09.080 Available Spare: 0% 00:08:09.080 Available Spare Threshold: 0% 00:08:09.080 Life Percentage Used: 0% 00:08:09.080 Data Units Read: 947 00:08:09.080 Data Units Written: 876 00:08:09.080 Host Read Commands: 40176 00:08:09.080 Host Write Commands: 39600 00:08:09.080 Controller Busy Time: 0 minutes 00:08:09.080 Power Cycles: 0 00:08:09.080 Power On Hours: 0 hours 00:08:09.080 Unsafe Shutdowns: 0 00:08:09.080 Unrecoverable Media Errors: 0 00:08:09.080 Lifetime Error Log Entries: 0 00:08:09.080 Warning Temperature Time: 0 minutes 00:08:09.080 Critical Temperature Time: 0 minutes 00:08:09.080 00:08:09.080 Number of Queues 00:08:09.080 ================ 00:08:09.080 Number of I/O Submission Queues: 64 00:08:09.080 Number of I/O Completion Queues: 64 00:08:09.080 00:08:09.080 ZNS Specific Controller Data 00:08:09.080 ============================ 00:08:09.080 Zone Append Size Limit: 0 00:08:09.080 00:08:09.080 00:08:09.080 Active Namespaces 00:08:09.080 ================= 00:08:09.080 Namespace ID:1 00:08:09.080 Error Recovery Timeout: Unlimited 00:08:09.080 Command Set Identifier: NVM (00h) 00:08:09.080 Deallocate: Supported 00:08:09.080 Deallocated/Unwritten Error: Supported 00:08:09.080 Deallocated Read Value: All 0x00 00:08:09.080 Deallocate in Write Zeroes: Not Supported 00:08:09.080 Deallocated Guard Field: 0xFFFF 00:08:09.080 Flush: Supported 00:08:09.080 Reservation: Not Supported 00:08:09.080 Namespace Sharing Capabilities: Multiple Controllers 00:08:09.080 Size (in LBAs): 262144 (1GiB) 00:08:09.080 Capacity (in LBAs): 262144 (1GiB) 00:08:09.080 Utilization (in LBAs): 262144 (1GiB) 00:08:09.080 Thin Provisioning: Not Supported 00:08:09.080 Per-NS Atomic Units: No 00:08:09.080 Maximum Single Source Range 
Length: 128 00:08:09.080 Maximum Copy Length: 128 00:08:09.080 Maximum Source Range Count: 128 00:08:09.080 NGUID/EUI64 Never Reused: No 00:08:09.080 Namespace Write Protected: No 00:08:09.080 Endurance group ID: 1 00:08:09.080 Number of LBA Formats: 8 00:08:09.080 Current LBA Format: LBA Format #04 00:08:09.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.080 00:08:09.080 Get Feature FDP: 00:08:09.080 ================ 00:08:09.080 Enabled: Yes 00:08:09.080 FDP configuration index: 0 00:08:09.080 00:08:09.080 FDP configurations log page 00:08:09.080 =========================== 00:08:09.080 Number of FDP configurations: 1 00:08:09.080 Version: 0 00:08:09.080 Size: 112 00:08:09.080 FDP Configuration Descriptor: 0 00:08:09.080 Descriptor Size: 96 00:08:09.080 Reclaim Group Identifier format: 2 00:08:09.080 FDP Volatile Write Cache: Not Present 00:08:09.080 FDP Configuration: Valid 00:08:09.080 Vendor Specific Size: 0 00:08:09.080 Number of Reclaim Groups: 2 00:08:09.080 Number of Reclaim Unit Handles: 8 00:08:09.080 Max Placement Identifiers: 128 00:08:09.080 Number of Namespaces Supported: 256 00:08:09.080 Reclaim Unit Nominal Size: 6000000 bytes 00:08:09.080 Estimated Reclaim Unit Time Limit: Not Reported 00:08:09.080 RUH Desc #000: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #001: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #002: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #003: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #004: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #005: RUH Type: 
Initially Isolated 00:08:09.080 RUH Desc #006: RUH Type: Initially Isolated 00:08:09.080 RUH Desc #007: RUH Type: Initially Isolated 00:08:09.080 00:08:09.080 FDP reclaim unit handle usage log page 00:08:09.080 ====================================== 00:08:09.080 Number of Reclaim Unit Handles: 8 00:08:09.080 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:09.080 RUH Usage Desc #001: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #002: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #003: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #004: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #005: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #006: RUH Attributes: Unused 00:08:09.080 RUH Usage Desc #007: RUH Attributes: Unused 00:08:09.080 00:08:09.080 FDP statistics log page 00:08:09.080 ======================= 00:08:09.080 Host bytes with metadata written: 544251904 00:08:09.080 Media bytes with metadata written: 544321536 00:08:09.080 Media bytes erased: 0 00:08:09.080 00:08:09.080 FDP events log page 00:08:09.080 =================== 00:08:09.080 Number of FDP events: 0 00:08:09.080 00:08:09.080 NVM Specific Namespace Data 00:08:09.080 =========================== 00:08:09.080 Logical Block Storage Tag Mask: 0 00:08:09.080 Protection Information Capabilities: 00:08:09.080 16b Guard Protection Information Storage Tag Support: No 00:08:09.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.080 Storage Tag Check Read Support: No 00:08:09.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.080 00:08:09.080 real 0m1.118s 00:08:09.080 user 0m0.355s 00:08:09.080 sys 0m0.500s 00:08:09.080 08:52:03 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.080 08:52:03 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 ************************************ 00:08:09.080 END TEST nvme_identify 00:08:09.080 ************************************ 00:08:09.080 08:52:03 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:09.080 08:52:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.080 08:52:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.080 08:52:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 ************************************ 00:08:09.080 START TEST nvme_perf 00:08:09.080 ************************************ 00:08:09.080 08:52:03 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:09.080 08:52:03 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:10.468 Initializing NVMe Controllers 00:08:10.468 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:10.468 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:10.468 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:10.468 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:10.468 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:10.468 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:10.468 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:10.468 Associating PCIE 
(0000:00:12.0) NSID 1 with lcore 0 00:08:10.468 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:10.468 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:10.468 Initialization complete. Launching workers. 00:08:10.468 ======================================================== 00:08:10.468 Latency(us) 00:08:10.468 Device Information : IOPS MiB/s Average min max 00:08:10.468 PCIE (0000:00:10.0) NSID 1 from core 0: 11524.26 135.05 11100.05 6594.96 39164.44 00:08:10.468 PCIE (0000:00:11.0) NSID 1 from core 0: 11524.26 135.05 11072.11 5763.14 37589.96 00:08:10.468 PCIE (0000:00:13.0) NSID 1 from core 0: 11524.26 135.05 11054.86 4525.12 36117.93 00:08:10.468 PCIE (0000:00:12.0) NSID 1 from core 0: 11524.26 135.05 11045.33 4216.78 36206.61 00:08:10.468 PCIE (0000:00:12.0) NSID 2 from core 0: 11524.26 135.05 11035.92 3893.43 36382.38 00:08:10.468 PCIE (0000:00:12.0) NSID 3 from core 0: 11587.93 135.80 10965.97 3453.62 28983.99 00:08:10.468 ======================================================== 00:08:10.468 Total : 69209.22 811.05 11045.63 3453.62 39164.44 00:08:10.468 00:08:10.468 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:10.468 ================================================================================= 00:08:10.468 1.00000% : 7410.609us 00:08:10.468 10.00000% : 8166.794us 00:08:10.468 25.00000% : 8973.391us 00:08:10.468 50.00000% : 10485.760us 00:08:10.468 75.00000% : 12451.840us 00:08:10.468 90.00000% : 14619.569us 00:08:10.468 95.00000% : 15325.342us 00:08:10.468 98.00000% : 16535.237us 00:08:10.468 99.00000% : 30247.385us 00:08:10.468 99.50000% : 37305.108us 00:08:10.468 99.90000% : 38918.302us 00:08:10.468 99.99000% : 39119.951us 00:08:10.468 99.99900% : 39321.600us 00:08:10.468 99.99990% : 39321.600us 00:08:10.468 99.99999% : 39321.600us 00:08:10.468 00:08:10.468 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:10.468 
================================================================================= 00:08:10.469 1.00000% : 7511.434us 00:08:10.469 10.00000% : 8166.794us 00:08:10.469 25.00000% : 8973.391us 00:08:10.469 50.00000% : 10536.172us 00:08:10.469 75.00000% : 12451.840us 00:08:10.469 90.00000% : 14619.569us 00:08:10.469 95.00000% : 15224.517us 00:08:10.469 98.00000% : 16535.237us 00:08:10.469 99.00000% : 28432.542us 00:08:10.469 99.50000% : 35288.615us 00:08:10.469 99.90000% : 37305.108us 00:08:10.469 99.99000% : 37708.406us 00:08:10.469 99.99900% : 37708.406us 00:08:10.469 99.99990% : 37708.406us 00:08:10.469 99.99999% : 37708.406us 00:08:10.469 00:08:10.469 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:10.469 ================================================================================= 00:08:10.469 1.00000% : 7410.609us 00:08:10.469 10.00000% : 8166.794us 00:08:10.469 25.00000% : 8973.391us 00:08:10.469 50.00000% : 10536.172us 00:08:10.469 75.00000% : 12502.252us 00:08:10.469 90.00000% : 14518.745us 00:08:10.469 95.00000% : 15325.342us 00:08:10.469 98.00000% : 16434.412us 00:08:10.469 99.00000% : 28835.840us 00:08:10.469 99.50000% : 34683.668us 00:08:10.469 99.90000% : 35893.563us 00:08:10.469 99.99000% : 36095.212us 00:08:10.469 99.99900% : 36296.862us 00:08:10.469 99.99990% : 36296.862us 00:08:10.469 99.99999% : 36296.862us 00:08:10.469 00:08:10.469 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:10.469 ================================================================================= 00:08:10.469 1.00000% : 7461.022us 00:08:10.469 10.00000% : 8217.206us 00:08:10.469 25.00000% : 9023.803us 00:08:10.469 50.00000% : 10485.760us 00:08:10.469 75.00000% : 12502.252us 00:08:10.469 90.00000% : 14518.745us 00:08:10.469 95.00000% : 15426.166us 00:08:10.469 98.00000% : 16434.412us 00:08:10.469 99.00000% : 28432.542us 00:08:10.469 99.50000% : 34683.668us 00:08:10.469 99.90000% : 36095.212us 00:08:10.469 99.99000% : 36296.862us 
00:08:10.469 99.99900% : 36296.862us 00:08:10.469 99.99990% : 36296.862us 00:08:10.469 99.99999% : 36296.862us 00:08:10.469 00:08:10.469 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:10.469 ================================================================================= 00:08:10.469 1.00000% : 7410.609us 00:08:10.469 10.00000% : 8166.794us 00:08:10.469 25.00000% : 9023.803us 00:08:10.469 50.00000% : 10485.760us 00:08:10.469 75.00000% : 12552.665us 00:08:10.469 90.00000% : 14417.920us 00:08:10.469 95.00000% : 15426.166us 00:08:10.469 98.00000% : 16535.237us 00:08:10.469 99.00000% : 28029.243us 00:08:10.469 99.50000% : 34885.317us 00:08:10.469 99.90000% : 36095.212us 00:08:10.469 99.99000% : 36498.511us 00:08:10.469 99.99900% : 36498.511us 00:08:10.469 99.99990% : 36498.511us 00:08:10.469 99.99999% : 36498.511us 00:08:10.469 00:08:10.469 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:10.469 ================================================================================= 00:08:10.469 1.00000% : 7410.609us 00:08:10.469 10.00000% : 8166.794us 00:08:10.469 25.00000% : 8973.391us 00:08:10.469 50.00000% : 10485.760us 00:08:10.469 75.00000% : 12552.665us 00:08:10.469 90.00000% : 14518.745us 00:08:10.469 95.00000% : 15325.342us 00:08:10.469 98.00000% : 16736.886us 00:08:10.469 99.00000% : 22282.240us 00:08:10.469 99.50000% : 28230.892us 00:08:10.469 99.90000% : 28835.840us 00:08:10.469 99.99000% : 29037.489us 00:08:10.469 99.99900% : 29037.489us 00:08:10.469 99.99990% : 29037.489us 00:08:10.469 99.99999% : 29037.489us 00:08:10.469 00:08:10.469 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:10.469 ============================================================================== 00:08:10.469 Range in us Cumulative IO count 00:08:10.469 6553.600 - 6604.012: 0.0173% ( 2) 00:08:10.469 6604.012 - 6654.425: 0.0432% ( 3) 00:08:10.469 6654.425 - 6704.837: 0.0777% ( 4) 00:08:10.469 6704.837 - 6755.249: 0.1122% ( 4) 
00:08:10.469 6755.249 - 6805.662: 0.1381% ( 3) 00:08:10.469 6805.662 - 6856.074: 0.1727% ( 4) 00:08:10.469 6856.074 - 6906.486: 0.1985% ( 3) 00:08:10.469 6906.486 - 6956.898: 0.2331% ( 4) 00:08:10.469 6956.898 - 7007.311: 0.2762% ( 5) 00:08:10.469 7007.311 - 7057.723: 0.2935% ( 2) 00:08:10.469 7057.723 - 7108.135: 0.3712% ( 9) 00:08:10.469 7108.135 - 7158.548: 0.4230% ( 6) 00:08:10.469 7158.548 - 7208.960: 0.5439% ( 14) 00:08:10.469 7208.960 - 7259.372: 0.6474% ( 12) 00:08:10.469 7259.372 - 7309.785: 0.7856% ( 16) 00:08:10.469 7309.785 - 7360.197: 0.9151% ( 15) 00:08:10.469 7360.197 - 7410.609: 1.1395% ( 26) 00:08:10.469 7410.609 - 7461.022: 1.5021% ( 42) 00:08:10.469 7461.022 - 7511.434: 1.7093% ( 24) 00:08:10.469 7511.434 - 7561.846: 2.0805% ( 43) 00:08:10.469 7561.846 - 7612.258: 2.4085% ( 38) 00:08:10.469 7612.258 - 7662.671: 2.8401% ( 50) 00:08:10.469 7662.671 - 7713.083: 3.2545% ( 48) 00:08:10.469 7713.083 - 7763.495: 3.9019% ( 75) 00:08:10.469 7763.495 - 7813.908: 4.5753% ( 78) 00:08:10.469 7813.908 - 7864.320: 5.2400% ( 77) 00:08:10.469 7864.320 - 7914.732: 6.0342% ( 92) 00:08:10.469 7914.732 - 7965.145: 6.8284% ( 92) 00:08:10.469 7965.145 - 8015.557: 7.6398% ( 94) 00:08:10.469 8015.557 - 8065.969: 8.5290% ( 103) 00:08:10.469 8065.969 - 8116.382: 9.4009% ( 101) 00:08:10.469 8116.382 - 8166.794: 10.3591% ( 111) 00:08:10.469 8166.794 - 8217.206: 11.2828% ( 107) 00:08:10.469 8217.206 - 8267.618: 12.1979% ( 106) 00:08:10.469 8267.618 - 8318.031: 13.2683% ( 124) 00:08:10.469 8318.031 - 8368.443: 14.3215% ( 122) 00:08:10.469 8368.443 - 8418.855: 15.2970% ( 113) 00:08:10.469 8418.855 - 8469.268: 16.1689% ( 101) 00:08:10.469 8469.268 - 8519.680: 17.2825% ( 129) 00:08:10.469 8519.680 - 8570.092: 18.3097% ( 119) 00:08:10.469 8570.092 - 8620.505: 19.2593% ( 110) 00:08:10.469 8620.505 - 8670.917: 20.1916% ( 108) 00:08:10.469 8670.917 - 8721.329: 21.1326% ( 109) 00:08:10.469 8721.329 - 8771.742: 22.0304% ( 104) 00:08:10.469 8771.742 - 8822.154: 22.9109% ( 102) 
00:08:10.469 8822.154 - 8872.566: 23.5670% ( 76) 00:08:10.469 8872.566 - 8922.978: 24.2835% ( 83) 00:08:10.469 8922.978 - 8973.391: 25.0000% ( 83) 00:08:10.469 8973.391 - 9023.803: 25.7510% ( 87) 00:08:10.469 9023.803 - 9074.215: 26.4503% ( 81) 00:08:10.469 9074.215 - 9124.628: 27.0718% ( 72) 00:08:10.469 9124.628 - 9175.040: 27.8919% ( 95) 00:08:10.469 9175.040 - 9225.452: 28.7293% ( 97) 00:08:10.469 9225.452 - 9275.865: 29.4544% ( 84) 00:08:10.469 9275.865 - 9326.277: 30.1623% ( 82) 00:08:10.469 9326.277 - 9376.689: 30.8443% ( 79) 00:08:10.469 9376.689 - 9427.102: 31.5953% ( 87) 00:08:10.469 9427.102 - 9477.514: 32.3463% ( 87) 00:08:10.469 9477.514 - 9527.926: 32.9938% ( 75) 00:08:10.469 9527.926 - 9578.338: 33.7362% ( 86) 00:08:10.469 9578.338 - 9628.751: 34.5477% ( 94) 00:08:10.469 9628.751 - 9679.163: 35.4886% ( 109) 00:08:10.469 9679.163 - 9729.575: 36.2742% ( 91) 00:08:10.469 9729.575 - 9779.988: 37.0770% ( 93) 00:08:10.469 9779.988 - 9830.400: 37.8971% ( 95) 00:08:10.469 9830.400 - 9880.812: 38.8553% ( 111) 00:08:10.469 9880.812 - 9931.225: 39.7531% ( 104) 00:08:10.469 9931.225 - 9981.637: 40.6509% ( 104) 00:08:10.469 9981.637 - 10032.049: 41.7041% ( 122) 00:08:10.469 10032.049 - 10082.462: 42.5587% ( 99) 00:08:10.469 10082.462 - 10132.874: 43.7327% ( 136) 00:08:10.469 10132.874 - 10183.286: 44.7600% ( 119) 00:08:10.469 10183.286 - 10233.698: 45.6146% ( 99) 00:08:10.469 10233.698 - 10284.111: 46.5815% ( 112) 00:08:10.469 10284.111 - 10334.523: 47.7987% ( 141) 00:08:10.469 10334.523 - 10384.935: 48.7137% ( 106) 00:08:10.469 10384.935 - 10435.348: 49.6029% ( 103) 00:08:10.469 10435.348 - 10485.760: 50.5093% ( 105) 00:08:10.469 10485.760 - 10536.172: 51.4071% ( 104) 00:08:10.469 10536.172 - 10586.585: 52.2272% ( 95) 00:08:10.469 10586.585 - 10636.997: 53.0818% ( 99) 00:08:10.469 10636.997 - 10687.409: 53.9019% ( 95) 00:08:10.469 10687.409 - 10737.822: 54.7652% ( 100) 00:08:10.469 10737.822 - 10788.234: 55.4817% ( 83) 00:08:10.469 10788.234 - 10838.646: 
56.1464% ( 77) 00:08:10.469 10838.646 - 10889.058: 57.0269% ( 102) 00:08:10.469 10889.058 - 10939.471: 57.9161% ( 103) 00:08:10.469 10939.471 - 10989.883: 58.6844% ( 89) 00:08:10.469 10989.883 - 11040.295: 59.4613% ( 90) 00:08:10.469 11040.295 - 11090.708: 60.1433% ( 79) 00:08:10.469 11090.708 - 11141.120: 60.8253% ( 79) 00:08:10.469 11141.120 - 11191.532: 61.5763% ( 87) 00:08:10.469 11191.532 - 11241.945: 62.2151% ( 74) 00:08:10.469 11241.945 - 11292.357: 62.9575% ( 86) 00:08:10.469 11292.357 - 11342.769: 63.5186% ( 65) 00:08:10.469 11342.769 - 11393.182: 64.2179% ( 81) 00:08:10.469 11393.182 - 11443.594: 64.9344% ( 83) 00:08:10.469 11443.594 - 11494.006: 65.5214% ( 68) 00:08:10.469 11494.006 - 11544.418: 66.1171% ( 69) 00:08:10.469 11544.418 - 11594.831: 66.6350% ( 60) 00:08:10.469 11594.831 - 11645.243: 67.2825% ( 75) 00:08:10.469 11645.243 - 11695.655: 67.9040% ( 72) 00:08:10.469 11695.655 - 11746.068: 68.5773% ( 78) 00:08:10.469 11746.068 - 11796.480: 69.0867% ( 59) 00:08:10.469 11796.480 - 11846.892: 69.8377% ( 87) 00:08:10.469 11846.892 - 11897.305: 70.3816% ( 63) 00:08:10.469 11897.305 - 11947.717: 70.8305% ( 52) 00:08:10.469 11947.717 - 11998.129: 71.2794% ( 52) 00:08:10.469 11998.129 - 12048.542: 71.6765% ( 46) 00:08:10.469 12048.542 - 12098.954: 72.0994% ( 49) 00:08:10.469 12098.954 - 12149.366: 72.4879% ( 45) 00:08:10.470 12149.366 - 12199.778: 72.8936% ( 47) 00:08:10.470 12199.778 - 12250.191: 73.4030% ( 59) 00:08:10.470 12250.191 - 12300.603: 73.8432% ( 51) 00:08:10.470 12300.603 - 12351.015: 74.3526% ( 59) 00:08:10.470 12351.015 - 12401.428: 74.7583% ( 47) 00:08:10.470 12401.428 - 12451.840: 75.2072% ( 52) 00:08:10.470 12451.840 - 12502.252: 75.6388% ( 50) 00:08:10.470 12502.252 - 12552.665: 75.9841% ( 40) 00:08:10.470 12552.665 - 12603.077: 76.5625% ( 67) 00:08:10.470 12603.077 - 12653.489: 76.8819% ( 37) 00:08:10.470 12653.489 - 12703.902: 77.3222% ( 51) 00:08:10.470 12703.902 - 12754.314: 77.7193% ( 46) 00:08:10.470 12754.314 - 12804.726: 78.0300% 
( 36) 00:08:10.470 12804.726 - 12855.138: 78.3494% ( 37) 00:08:10.470 12855.138 - 12905.551: 78.7552% ( 47) 00:08:10.470 12905.551 - 13006.375: 79.5321% ( 90) 00:08:10.470 13006.375 - 13107.200: 80.3954% ( 100) 00:08:10.470 13107.200 - 13208.025: 81.2673% ( 101) 00:08:10.470 13208.025 - 13308.849: 81.8974% ( 73) 00:08:10.470 13308.849 - 13409.674: 82.5535% ( 76) 00:08:10.470 13409.674 - 13510.498: 83.1233% ( 66) 00:08:10.470 13510.498 - 13611.323: 83.6671% ( 63) 00:08:10.470 13611.323 - 13712.148: 84.3059% ( 74) 00:08:10.470 13712.148 - 13812.972: 85.0915% ( 91) 00:08:10.470 13812.972 - 13913.797: 85.6958% ( 70) 00:08:10.470 13913.797 - 14014.622: 86.2655% ( 66) 00:08:10.470 14014.622 - 14115.446: 86.6799% ( 48) 00:08:10.470 14115.446 - 14216.271: 87.4396% ( 88) 00:08:10.470 14216.271 - 14317.095: 88.3028% ( 100) 00:08:10.470 14317.095 - 14417.920: 89.0884% ( 91) 00:08:10.470 14417.920 - 14518.745: 89.7617% ( 78) 00:08:10.470 14518.745 - 14619.569: 90.4782% ( 83) 00:08:10.470 14619.569 - 14720.394: 91.0653% ( 68) 00:08:10.470 14720.394 - 14821.218: 91.7818% ( 83) 00:08:10.470 14821.218 - 14922.043: 92.4551% ( 78) 00:08:10.470 14922.043 - 15022.868: 93.1026% ( 75) 00:08:10.470 15022.868 - 15123.692: 93.7327% ( 73) 00:08:10.470 15123.692 - 15224.517: 94.3802% ( 75) 00:08:10.470 15224.517 - 15325.342: 95.0017% ( 72) 00:08:10.470 15325.342 - 15426.166: 95.5110% ( 59) 00:08:10.470 15426.166 - 15526.991: 95.8391% ( 38) 00:08:10.470 15526.991 - 15627.815: 96.2103% ( 43) 00:08:10.470 15627.815 - 15728.640: 96.5383% ( 38) 00:08:10.470 15728.640 - 15829.465: 96.7887% ( 29) 00:08:10.470 15829.465 - 15930.289: 97.1685% ( 44) 00:08:10.470 15930.289 - 16031.114: 97.3498% ( 21) 00:08:10.470 16031.114 - 16131.938: 97.5052% ( 18) 00:08:10.470 16131.938 - 16232.763: 97.6260% ( 14) 00:08:10.470 16232.763 - 16333.588: 97.7814% ( 18) 00:08:10.470 16333.588 - 16434.412: 97.8591% ( 9) 00:08:10.470 16434.412 - 16535.237: 98.0231% ( 19) 00:08:10.470 16535.237 - 16636.062: 98.1440% ( 14) 
00:08:10.470 16636.062 - 16736.886: 98.2821% ( 16) 00:08:10.470 16736.886 - 16837.711: 98.3857% ( 12) 00:08:10.470 16837.711 - 16938.535: 98.4807% ( 11) 00:08:10.470 16938.535 - 17039.360: 98.6533% ( 20) 00:08:10.470 17039.360 - 17140.185: 98.6619% ( 1) 00:08:10.470 17140.185 - 17241.009: 98.7137% ( 6) 00:08:10.470 17241.009 - 17341.834: 98.7396% ( 3) 00:08:10.470 17341.834 - 17442.658: 98.7828% ( 5) 00:08:10.470 17442.658 - 17543.483: 98.8346% ( 6) 00:08:10.470 17543.483 - 17644.308: 98.8950% ( 7) 00:08:10.470 29642.437 - 29844.086: 98.9037% ( 1) 00:08:10.470 29844.086 - 30045.735: 98.9727% ( 8) 00:08:10.470 30045.735 - 30247.385: 99.0159% ( 5) 00:08:10.470 30247.385 - 30449.034: 99.0763% ( 7) 00:08:10.470 30449.034 - 30650.683: 99.1281% ( 6) 00:08:10.470 30650.683 - 30852.332: 99.1799% ( 6) 00:08:10.470 30852.332 - 31053.982: 99.2317% ( 6) 00:08:10.470 31053.982 - 31255.631: 99.2835% ( 6) 00:08:10.470 31255.631 - 31457.280: 99.3353% ( 6) 00:08:10.470 31457.280 - 31658.929: 99.3871% ( 6) 00:08:10.470 31658.929 - 31860.578: 99.4475% ( 7) 00:08:10.470 36901.809 - 37103.458: 99.4648% ( 2) 00:08:10.470 37103.458 - 37305.108: 99.5166% ( 6) 00:08:10.470 37305.108 - 37506.757: 99.5684% ( 6) 00:08:10.470 37506.757 - 37708.406: 99.6202% ( 6) 00:08:10.470 37708.406 - 37910.055: 99.6806% ( 7) 00:08:10.470 37910.055 - 38111.705: 99.7324% ( 6) 00:08:10.470 38111.705 - 38313.354: 99.7842% ( 6) 00:08:10.470 38313.354 - 38515.003: 99.8446% ( 7) 00:08:10.470 38515.003 - 38716.652: 99.8964% ( 6) 00:08:10.470 38716.652 - 38918.302: 99.9396% ( 5) 00:08:10.470 38918.302 - 39119.951: 99.9914% ( 6) 00:08:10.470 39119.951 - 39321.600: 100.0000% ( 1) 00:08:10.470 00:08:10.470 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:10.470 ============================================================================== 00:08:10.470 Range in us Cumulative IO count 00:08:10.470 5747.003 - 5772.209: 0.0086% ( 1) 00:08:10.470 5772.209 - 5797.415: 0.0518% ( 5) 00:08:10.470 5797.415 - 
5822.622: 0.0863% ( 4) 00:08:10.470 5822.622 - 5847.828: 0.0950% ( 1) 00:08:10.470 5847.828 - 5873.034: 0.1209% ( 3) 00:08:10.470 5873.034 - 5898.240: 0.1381% ( 2) 00:08:10.470 5898.240 - 5923.446: 0.1554% ( 2) 00:08:10.470 5923.446 - 5948.652: 0.1727% ( 2) 00:08:10.470 5948.652 - 5973.858: 0.1899% ( 2) 00:08:10.470 5973.858 - 5999.065: 0.2072% ( 2) 00:08:10.470 5999.065 - 6024.271: 0.2244% ( 2) 00:08:10.470 6024.271 - 6049.477: 0.2417% ( 2) 00:08:10.470 6049.477 - 6074.683: 0.2590% ( 2) 00:08:10.470 6074.683 - 6099.889: 0.2762% ( 2) 00:08:10.470 6099.889 - 6125.095: 0.2935% ( 2) 00:08:10.470 6125.095 - 6150.302: 0.3108% ( 2) 00:08:10.470 6150.302 - 6175.508: 0.3280% ( 2) 00:08:10.470 6175.508 - 6200.714: 0.3539% ( 3) 00:08:10.470 6200.714 - 6225.920: 0.3712% ( 2) 00:08:10.470 6225.920 - 6251.126: 0.3885% ( 2) 00:08:10.470 6251.126 - 6276.332: 0.3971% ( 1) 00:08:10.470 6276.332 - 6301.538: 0.4230% ( 3) 00:08:10.470 6301.538 - 6326.745: 0.4403% ( 2) 00:08:10.470 6326.745 - 6351.951: 0.4575% ( 2) 00:08:10.470 6351.951 - 6377.157: 0.4748% ( 2) 00:08:10.470 6377.157 - 6402.363: 0.4921% ( 2) 00:08:10.470 6402.363 - 6427.569: 0.5093% ( 2) 00:08:10.470 6427.569 - 6452.775: 0.5352% ( 3) 00:08:10.470 6452.775 - 6503.188: 0.5525% ( 2) 00:08:10.470 7057.723 - 7108.135: 0.5611% ( 1) 00:08:10.470 7108.135 - 7158.548: 0.5956% ( 4) 00:08:10.470 7158.548 - 7208.960: 0.6302% ( 4) 00:08:10.470 7208.960 - 7259.372: 0.6561% ( 3) 00:08:10.470 7259.372 - 7309.785: 0.6992% ( 5) 00:08:10.470 7309.785 - 7360.197: 0.7942% ( 11) 00:08:10.470 7360.197 - 7410.609: 0.8892% ( 11) 00:08:10.470 7410.609 - 7461.022: 0.9755% ( 10) 00:08:10.470 7461.022 - 7511.434: 1.1395% ( 19) 00:08:10.470 7511.434 - 7561.846: 1.3985% ( 30) 00:08:10.470 7561.846 - 7612.258: 1.7006% ( 35) 00:08:10.470 7612.258 - 7662.671: 2.1064% ( 47) 00:08:10.470 7662.671 - 7713.083: 2.6243% ( 60) 00:08:10.470 7713.083 - 7763.495: 3.1941% ( 66) 00:08:10.470 7763.495 - 7813.908: 3.8847% ( 80) 00:08:10.470 7813.908 - 7864.320: 
4.5666% ( 79) 00:08:10.470 7864.320 - 7914.732: 5.3781% ( 94) 00:08:10.470 7914.732 - 7965.145: 6.3450% ( 112) 00:08:10.470 7965.145 - 8015.557: 7.3463% ( 116) 00:08:10.470 8015.557 - 8065.969: 8.2959% ( 110) 00:08:10.470 8065.969 - 8116.382: 9.2369% ( 109) 00:08:10.470 8116.382 - 8166.794: 10.2814% ( 121) 00:08:10.470 8166.794 - 8217.206: 11.4037% ( 130) 00:08:10.470 8217.206 - 8267.618: 12.4396% ( 120) 00:08:10.470 8267.618 - 8318.031: 13.5532% ( 129) 00:08:10.470 8318.031 - 8368.443: 14.6668% ( 129) 00:08:10.470 8368.443 - 8418.855: 15.7113% ( 121) 00:08:10.470 8418.855 - 8469.268: 16.7300% ( 118) 00:08:10.470 8469.268 - 8519.680: 17.7314% ( 116) 00:08:10.470 8519.680 - 8570.092: 18.7932% ( 123) 00:08:10.470 8570.092 - 8620.505: 19.7255% ( 108) 00:08:10.470 8620.505 - 8670.917: 20.8391% ( 129) 00:08:10.470 8670.917 - 8721.329: 21.7800% ( 109) 00:08:10.470 8721.329 - 8771.742: 22.7124% ( 108) 00:08:10.470 8771.742 - 8822.154: 23.5497% ( 97) 00:08:10.470 8822.154 - 8872.566: 24.3008% ( 87) 00:08:10.470 8872.566 - 8922.978: 24.9914% ( 80) 00:08:10.470 8922.978 - 8973.391: 25.7510% ( 88) 00:08:10.470 8973.391 - 9023.803: 26.4675% ( 83) 00:08:10.470 9023.803 - 9074.215: 27.1495% ( 79) 00:08:10.470 9074.215 - 9124.628: 27.8315% ( 79) 00:08:10.470 9124.628 - 9175.040: 28.5912% ( 88) 00:08:10.470 9175.040 - 9225.452: 29.2818% ( 80) 00:08:10.470 9225.452 - 9275.865: 29.9551% ( 78) 00:08:10.470 9275.865 - 9326.277: 30.5853% ( 73) 00:08:10.470 9326.277 - 9376.689: 31.2155% ( 73) 00:08:10.470 9376.689 - 9427.102: 31.8370% ( 72) 00:08:10.470 9427.102 - 9477.514: 32.5017% ( 77) 00:08:10.470 9477.514 - 9527.926: 33.1492% ( 75) 00:08:10.470 9527.926 - 9578.338: 33.8311% ( 79) 00:08:10.470 9578.338 - 9628.751: 34.5477% ( 83) 00:08:10.470 9628.751 - 9679.163: 35.4282% ( 102) 00:08:10.470 9679.163 - 9729.575: 36.1102% ( 79) 00:08:10.470 9729.575 - 9779.988: 36.8439% ( 85) 00:08:10.470 9779.988 - 9830.400: 37.6122% ( 89) 00:08:10.470 9830.400 - 9880.812: 38.3719% ( 88) 00:08:10.470 
9880.812 - 9931.225: 39.2352% ( 100) 00:08:10.470 9931.225 - 9981.637: 40.1243% ( 103) 00:08:10.470 9981.637 - 10032.049: 41.1257% ( 116) 00:08:10.470 10032.049 - 10082.462: 42.0235% ( 104) 00:08:10.470 10082.462 - 10132.874: 43.0335% ( 117) 00:08:10.470 10132.874 - 10183.286: 43.9917% ( 111) 00:08:10.470 10183.286 - 10233.698: 44.9240% ( 108) 00:08:10.470 10233.698 - 10284.111: 45.8132% ( 103) 00:08:10.470 10284.111 - 10334.523: 46.7628% ( 110) 00:08:10.470 10334.523 - 10384.935: 47.8073% ( 121) 00:08:10.470 10384.935 - 10435.348: 48.7396% ( 108) 00:08:10.470 10435.348 - 10485.760: 49.7497% ( 117) 00:08:10.470 10485.760 - 10536.172: 50.5698% ( 95) 00:08:10.470 10536.172 - 10586.585: 51.4416% ( 101) 00:08:10.471 10586.585 - 10636.997: 52.3567% ( 106) 00:08:10.471 10636.997 - 10687.409: 53.3581% ( 116) 00:08:10.471 10687.409 - 10737.822: 54.4199% ( 123) 00:08:10.471 10737.822 - 10788.234: 55.2314% ( 94) 00:08:10.471 10788.234 - 10838.646: 56.1205% ( 103) 00:08:10.471 10838.646 - 10889.058: 57.0528% ( 108) 00:08:10.471 10889.058 - 10939.471: 57.8816% ( 96) 00:08:10.471 10939.471 - 10989.883: 58.6758% ( 92) 00:08:10.471 10989.883 - 11040.295: 59.4872% ( 94) 00:08:10.471 11040.295 - 11090.708: 60.2642% ( 90) 00:08:10.471 11090.708 - 11141.120: 61.0066% ( 86) 00:08:10.471 11141.120 - 11191.532: 61.7576% ( 87) 00:08:10.471 11191.532 - 11241.945: 62.5604% ( 93) 00:08:10.471 11241.945 - 11292.357: 63.3546% ( 92) 00:08:10.471 11292.357 - 11342.769: 64.0711% ( 83) 00:08:10.471 11342.769 - 11393.182: 64.7790% ( 82) 00:08:10.471 11393.182 - 11443.594: 65.4265% ( 75) 00:08:10.471 11443.594 - 11494.006: 66.0566% ( 73) 00:08:10.471 11494.006 - 11544.418: 66.6350% ( 67) 00:08:10.471 11544.418 - 11594.831: 67.2220% ( 68) 00:08:10.471 11594.831 - 11645.243: 67.7400% ( 60) 00:08:10.471 11645.243 - 11695.655: 68.1975% ( 53) 00:08:10.471 11695.655 - 11746.068: 68.6291% ( 50) 00:08:10.471 11746.068 - 11796.480: 69.1471% ( 60) 00:08:10.471 11796.480 - 11846.892: 69.6478% ( 58) 
00:08:10.471 11846.892 - 11897.305: 70.1744% ( 61) 00:08:10.471 11897.305 - 11947.717: 70.6492% ( 55) 00:08:10.471 11947.717 - 11998.129: 71.1499% ( 58) 00:08:10.471 11998.129 - 12048.542: 71.6419% ( 57) 00:08:10.471 12048.542 - 12098.954: 72.0994% ( 53) 00:08:10.471 12098.954 - 12149.366: 72.5052% ( 47) 00:08:10.471 12149.366 - 12199.778: 72.9454% ( 51) 00:08:10.471 12199.778 - 12250.191: 73.3943% ( 52) 00:08:10.471 12250.191 - 12300.603: 73.8346% ( 51) 00:08:10.471 12300.603 - 12351.015: 74.2835% ( 52) 00:08:10.471 12351.015 - 12401.428: 74.6806% ( 46) 00:08:10.471 12401.428 - 12451.840: 75.0432% ( 42) 00:08:10.471 12451.840 - 12502.252: 75.3885% ( 40) 00:08:10.471 12502.252 - 12552.665: 75.6992% ( 36) 00:08:10.471 12552.665 - 12603.077: 76.1050% ( 47) 00:08:10.471 12603.077 - 12653.489: 76.6229% ( 60) 00:08:10.471 12653.489 - 12703.902: 77.1236% ( 58) 00:08:10.471 12703.902 - 12754.314: 77.5811% ( 53) 00:08:10.471 12754.314 - 12804.726: 77.9351% ( 41) 00:08:10.471 12804.726 - 12855.138: 78.3494% ( 48) 00:08:10.471 12855.138 - 12905.551: 78.6775% ( 38) 00:08:10.471 12905.551 - 13006.375: 79.4803% ( 93) 00:08:10.471 13006.375 - 13107.200: 80.2573% ( 90) 00:08:10.471 13107.200 - 13208.025: 80.8788% ( 72) 00:08:10.471 13208.025 - 13308.849: 81.4744% ( 69) 00:08:10.471 13308.849 - 13409.674: 82.2859% ( 94) 00:08:10.471 13409.674 - 13510.498: 83.0369% ( 87) 00:08:10.471 13510.498 - 13611.323: 83.8225% ( 91) 00:08:10.471 13611.323 - 13712.148: 84.4354% ( 71) 00:08:10.471 13712.148 - 13812.972: 85.0570% ( 72) 00:08:10.471 13812.972 - 13913.797: 85.5663% ( 59) 00:08:10.471 13913.797 - 14014.622: 86.1706% ( 70) 00:08:10.471 14014.622 - 14115.446: 86.7835% ( 71) 00:08:10.471 14115.446 - 14216.271: 87.3273% ( 63) 00:08:10.471 14216.271 - 14317.095: 87.9144% ( 68) 00:08:10.471 14317.095 - 14417.920: 88.6827% ( 89) 00:08:10.471 14417.920 - 14518.745: 89.4423% ( 88) 00:08:10.471 14518.745 - 14619.569: 90.2365% ( 92) 00:08:10.471 14619.569 - 14720.394: 91.1257% ( 103) 
00:08:10.471 14720.394 - 14821.218: 92.0407% ( 106) 00:08:10.471 14821.218 - 14922.043: 92.9213% ( 102) 00:08:10.471 14922.043 - 15022.868: 93.7932% ( 101) 00:08:10.471 15022.868 - 15123.692: 94.5183% ( 84) 00:08:10.471 15123.692 - 15224.517: 95.1657% ( 75) 00:08:10.471 15224.517 - 15325.342: 95.6664% ( 58) 00:08:10.471 15325.342 - 15426.166: 96.1067% ( 51) 00:08:10.471 15426.166 - 15526.991: 96.4088% ( 35) 00:08:10.471 15526.991 - 15627.815: 96.7110% ( 35) 00:08:10.471 15627.815 - 15728.640: 96.9268% ( 25) 00:08:10.471 15728.640 - 15829.465: 97.1081% ( 21) 00:08:10.471 15829.465 - 15930.289: 97.2894% ( 21) 00:08:10.471 15930.289 - 16031.114: 97.4016% ( 13) 00:08:10.471 16031.114 - 16131.938: 97.5397% ( 16) 00:08:10.471 16131.938 - 16232.763: 97.6778% ( 16) 00:08:10.471 16232.763 - 16333.588: 97.8419% ( 19) 00:08:10.471 16333.588 - 16434.412: 97.9800% ( 16) 00:08:10.471 16434.412 - 16535.237: 98.1181% ( 16) 00:08:10.471 16535.237 - 16636.062: 98.2131% ( 11) 00:08:10.471 16636.062 - 16736.886: 98.3339% ( 14) 00:08:10.471 16736.886 - 16837.711: 98.4461% ( 13) 00:08:10.471 16837.711 - 16938.535: 98.5584% ( 13) 00:08:10.471 16938.535 - 17039.360: 98.6619% ( 12) 00:08:10.471 17039.360 - 17140.185: 98.7137% ( 6) 00:08:10.471 17140.185 - 17241.009: 98.7742% ( 7) 00:08:10.471 17241.009 - 17341.834: 98.8260% ( 6) 00:08:10.471 17341.834 - 17442.658: 98.8864% ( 7) 00:08:10.471 17442.658 - 17543.483: 98.8950% ( 1) 00:08:10.471 28029.243 - 28230.892: 98.9727% ( 9) 00:08:10.471 28230.892 - 28432.542: 99.0763% ( 12) 00:08:10.471 28432.542 - 28634.191: 99.1367% ( 7) 00:08:10.471 28634.191 - 28835.840: 99.1972% ( 7) 00:08:10.471 28835.840 - 29037.489: 99.2403% ( 5) 00:08:10.471 29037.489 - 29239.138: 99.2835% ( 5) 00:08:10.471 29239.138 - 29440.788: 99.3439% ( 7) 00:08:10.471 29440.788 - 29642.437: 99.4044% ( 7) 00:08:10.471 29642.437 - 29844.086: 99.4475% ( 5) 00:08:10.471 34885.317 - 35086.966: 99.4820% ( 4) 00:08:10.471 35086.966 - 35288.615: 99.5252% ( 5) 00:08:10.471 35288.615 
- 35490.265: 99.5597% ( 4) 00:08:10.471 35490.265 - 35691.914: 99.6029% ( 5) 00:08:10.471 35691.914 - 35893.563: 99.6461% ( 5) 00:08:10.471 35893.563 - 36095.212: 99.6892% ( 5) 00:08:10.471 36095.212 - 36296.862: 99.7324% ( 5) 00:08:10.471 36296.862 - 36498.511: 99.7669% ( 4) 00:08:10.471 36498.511 - 36700.160: 99.8101% ( 5) 00:08:10.471 36700.160 - 36901.809: 99.8532% ( 5) 00:08:10.471 36901.809 - 37103.458: 99.8964% ( 5) 00:08:10.471 37103.458 - 37305.108: 99.9396% ( 5) 00:08:10.471 37305.108 - 37506.757: 99.9741% ( 4) 00:08:10.471 37506.757 - 37708.406: 100.0000% ( 3) 00:08:10.471 00:08:10.471 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:10.471 ============================================================================== 00:08:10.471 Range in us Cumulative IO count 00:08:10.471 4511.902 - 4537.108: 0.0345% ( 4) 00:08:10.471 4537.108 - 4562.314: 0.0518% ( 2) 00:08:10.471 4562.314 - 4587.520: 0.0691% ( 2) 00:08:10.471 4587.520 - 4612.726: 0.0777% ( 1) 00:08:10.471 4612.726 - 4637.932: 0.0950% ( 2) 00:08:10.471 4637.932 - 4663.138: 0.1295% ( 4) 00:08:10.471 4663.138 - 4688.345: 0.1381% ( 1) 00:08:10.471 4688.345 - 4713.551: 0.1727% ( 4) 00:08:10.471 4713.551 - 4738.757: 0.1813% ( 1) 00:08:10.471 4738.757 - 4763.963: 0.2072% ( 3) 00:08:10.471 4763.963 - 4789.169: 0.2244% ( 2) 00:08:10.471 4789.169 - 4814.375: 0.2417% ( 2) 00:08:10.471 4814.375 - 4839.582: 0.2590% ( 2) 00:08:10.471 4839.582 - 4864.788: 0.2762% ( 2) 00:08:10.471 4864.788 - 4889.994: 0.2935% ( 2) 00:08:10.471 4889.994 - 4915.200: 0.3108% ( 2) 00:08:10.471 4915.200 - 4940.406: 0.3280% ( 2) 00:08:10.471 4940.406 - 4965.612: 0.3367% ( 1) 00:08:10.471 4965.612 - 4990.818: 0.3453% ( 1) 00:08:10.471 4990.818 - 5016.025: 0.3539% ( 1) 00:08:10.471 5016.025 - 5041.231: 0.3626% ( 1) 00:08:10.471 5041.231 - 5066.437: 0.3712% ( 1) 00:08:10.471 5066.437 - 5091.643: 0.3798% ( 1) 00:08:10.471 5091.643 - 5116.849: 0.3885% ( 1) 00:08:10.471 5116.849 - 5142.055: 0.3971% ( 1) 00:08:10.471 5142.055 
- 5167.262: 0.4057% ( 1) 00:08:10.471 5167.262 - 5192.468: 0.4230% ( 2) 00:08:10.471 5192.468 - 5217.674: 0.4316% ( 1) 00:08:10.471 5217.674 - 5242.880: 0.4403% ( 1) 00:08:10.471 5242.880 - 5268.086: 0.4489% ( 1) 00:08:10.471 5268.086 - 5293.292: 0.4575% ( 1) 00:08:10.471 5293.292 - 5318.498: 0.4662% ( 1) 00:08:10.471 5318.498 - 5343.705: 0.4748% ( 1) 00:08:10.471 5343.705 - 5368.911: 0.4921% ( 2) 00:08:10.471 5368.911 - 5394.117: 0.5007% ( 1) 00:08:10.471 5394.117 - 5419.323: 0.5093% ( 1) 00:08:10.471 5419.323 - 5444.529: 0.5180% ( 1) 00:08:10.471 5444.529 - 5469.735: 0.5266% ( 1) 00:08:10.471 5469.735 - 5494.942: 0.5352% ( 1) 00:08:10.472 5494.942 - 5520.148: 0.5525% ( 2) 00:08:10.472 7057.723 - 7108.135: 0.5698% ( 2) 00:08:10.472 7108.135 - 7158.548: 0.5956% ( 3) 00:08:10.472 7158.548 - 7208.960: 0.6474% ( 6) 00:08:10.472 7208.960 - 7259.372: 0.7338% ( 10) 00:08:10.472 7259.372 - 7309.785: 0.8115% ( 9) 00:08:10.472 7309.785 - 7360.197: 0.8978% ( 10) 00:08:10.472 7360.197 - 7410.609: 1.0273% ( 15) 00:08:10.472 7410.609 - 7461.022: 1.1740% ( 17) 00:08:10.472 7461.022 - 7511.434: 1.3726% ( 23) 00:08:10.472 7511.434 - 7561.846: 1.7438% ( 43) 00:08:10.472 7561.846 - 7612.258: 2.0977% ( 41) 00:08:10.472 7612.258 - 7662.671: 2.4689% ( 43) 00:08:10.472 7662.671 - 7713.083: 2.9006% ( 50) 00:08:10.472 7713.083 - 7763.495: 3.4185% ( 60) 00:08:10.472 7763.495 - 7813.908: 4.0401% ( 72) 00:08:10.472 7813.908 - 7864.320: 4.7307% ( 80) 00:08:10.472 7864.320 - 7914.732: 5.4990% ( 89) 00:08:10.472 7914.732 - 7965.145: 6.3622% ( 100) 00:08:10.472 7965.145 - 8015.557: 7.3118% ( 110) 00:08:10.472 8015.557 - 8065.969: 8.4427% ( 131) 00:08:10.472 8065.969 - 8116.382: 9.3836% ( 109) 00:08:10.472 8116.382 - 8166.794: 10.3591% ( 113) 00:08:10.472 8166.794 - 8217.206: 11.4037% ( 121) 00:08:10.472 8217.206 - 8267.618: 12.5173% ( 129) 00:08:10.472 8267.618 - 8318.031: 13.6395% ( 130) 00:08:10.472 8318.031 - 8368.443: 14.7445% ( 128) 00:08:10.472 8368.443 - 8418.855: 15.7631% ( 118) 
00:08:10.472 8418.855 - 8469.268: 16.7386% ( 113) 00:08:10.472 8469.268 - 8519.680: 17.7745% ( 120) 00:08:10.472 8519.680 - 8570.092: 18.8018% ( 119) 00:08:10.472 8570.092 - 8620.505: 19.8636% ( 123) 00:08:10.472 8620.505 - 8670.917: 20.8909% ( 119) 00:08:10.472 8670.917 - 8721.329: 21.8577% ( 112) 00:08:10.472 8721.329 - 8771.742: 22.7383% ( 102) 00:08:10.472 8771.742 - 8822.154: 23.5756% ( 97) 00:08:10.472 8822.154 - 8872.566: 24.3180% ( 86) 00:08:10.472 8872.566 - 8922.978: 24.9741% ( 76) 00:08:10.472 8922.978 - 8973.391: 25.7424% ( 89) 00:08:10.472 8973.391 - 9023.803: 26.4762% ( 85) 00:08:10.472 9023.803 - 9074.215: 27.2186% ( 86) 00:08:10.472 9074.215 - 9124.628: 27.9351% ( 83) 00:08:10.472 9124.628 - 9175.040: 28.5566% ( 72) 00:08:10.472 9175.040 - 9225.452: 29.2127% ( 76) 00:08:10.472 9225.452 - 9275.865: 29.8860% ( 78) 00:08:10.472 9275.865 - 9326.277: 30.6457% ( 88) 00:08:10.472 9326.277 - 9376.689: 31.3968% ( 87) 00:08:10.472 9376.689 - 9427.102: 32.1737% ( 90) 00:08:10.472 9427.102 - 9477.514: 33.0974% ( 107) 00:08:10.472 9477.514 - 9527.926: 33.8916% ( 92) 00:08:10.472 9527.926 - 9578.338: 34.7030% ( 94) 00:08:10.472 9578.338 - 9628.751: 35.4541% ( 87) 00:08:10.472 9628.751 - 9679.163: 36.2396% ( 91) 00:08:10.472 9679.163 - 9729.575: 36.9820% ( 86) 00:08:10.472 9729.575 - 9779.988: 37.6640% ( 79) 00:08:10.472 9779.988 - 9830.400: 38.5100% ( 98) 00:08:10.472 9830.400 - 9880.812: 39.3560% ( 98) 00:08:10.472 9880.812 - 9931.225: 40.1847% ( 96) 00:08:10.472 9931.225 - 9981.637: 41.0135% ( 96) 00:08:10.472 9981.637 - 10032.049: 41.9026% ( 103) 00:08:10.472 10032.049 - 10082.462: 42.7227% ( 95) 00:08:10.472 10082.462 - 10132.874: 43.5256% ( 93) 00:08:10.472 10132.874 - 10183.286: 44.2162% ( 80) 00:08:10.472 10183.286 - 10233.698: 45.0794% ( 100) 00:08:10.472 10233.698 - 10284.111: 46.0722% ( 115) 00:08:10.472 10284.111 - 10334.523: 46.9268% ( 99) 00:08:10.472 10334.523 - 10384.935: 47.7987% ( 101) 00:08:10.472 10384.935 - 10435.348: 48.6878% ( 103) 
00:08:10.472 10435.348 - 10485.760: 49.5425% ( 99) 00:08:10.472 10485.760 - 10536.172: 50.3798% ( 97) 00:08:10.472 10536.172 - 10586.585: 51.2690% ( 103) 00:08:10.472 10586.585 - 10636.997: 52.2099% ( 109) 00:08:10.472 10636.997 - 10687.409: 53.0128% ( 93) 00:08:10.472 10687.409 - 10737.822: 53.9796% ( 112) 00:08:10.472 10737.822 - 10788.234: 54.8515% ( 101) 00:08:10.472 10788.234 - 10838.646: 55.7320% ( 102) 00:08:10.472 10838.646 - 10889.058: 56.6644% ( 108) 00:08:10.472 10889.058 - 10939.471: 57.4845% ( 95) 00:08:10.472 10939.471 - 10989.883: 58.3305% ( 98) 00:08:10.472 10989.883 - 11040.295: 59.1937% ( 100) 00:08:10.472 11040.295 - 11090.708: 60.0656% ( 101) 00:08:10.472 11090.708 - 11141.120: 60.8943% ( 96) 00:08:10.472 11141.120 - 11191.532: 61.6367% ( 86) 00:08:10.472 11191.532 - 11241.945: 62.4223% ( 91) 00:08:10.472 11241.945 - 11292.357: 63.1302% ( 82) 00:08:10.472 11292.357 - 11342.769: 63.8035% ( 78) 00:08:10.472 11342.769 - 11393.182: 64.4164% ( 71) 00:08:10.472 11393.182 - 11443.594: 65.0035% ( 68) 00:08:10.472 11443.594 - 11494.006: 65.5646% ( 65) 00:08:10.472 11494.006 - 11544.418: 66.1602% ( 69) 00:08:10.472 11544.418 - 11594.831: 66.6782% ( 60) 00:08:10.472 11594.831 - 11645.243: 67.2307% ( 64) 00:08:10.472 11645.243 - 11695.655: 67.7831% ( 64) 00:08:10.472 11695.655 - 11746.068: 68.2752% ( 57) 00:08:10.472 11746.068 - 11796.480: 68.7327% ( 53) 00:08:10.472 11796.480 - 11846.892: 69.3111% ( 67) 00:08:10.472 11846.892 - 11897.305: 69.8550% ( 63) 00:08:10.472 11897.305 - 11947.717: 70.3470% ( 57) 00:08:10.472 11947.717 - 11998.129: 70.7787% ( 50) 00:08:10.472 11998.129 - 12048.542: 71.2621% ( 56) 00:08:10.472 12048.542 - 12098.954: 71.8405% ( 67) 00:08:10.472 12098.954 - 12149.366: 72.3757% ( 62) 00:08:10.472 12149.366 - 12199.778: 72.8591% ( 56) 00:08:10.472 12199.778 - 12250.191: 73.3253% ( 54) 00:08:10.472 12250.191 - 12300.603: 73.7137% ( 45) 00:08:10.472 12300.603 - 12351.015: 74.1281% ( 48) 00:08:10.472 12351.015 - 12401.428: 74.4820% ( 41) 
00:08:10.472 12401.428 - 12451.840: 74.9568% ( 55) 00:08:10.472 12451.840 - 12502.252: 75.3885% ( 50) 00:08:10.472 12502.252 - 12552.665: 75.8028% ( 48) 00:08:10.472 12552.665 - 12603.077: 76.2604% ( 53) 00:08:10.472 12603.077 - 12653.489: 76.6747% ( 48) 00:08:10.472 12653.489 - 12703.902: 77.1840% ( 59) 00:08:10.472 12703.902 - 12754.314: 77.6157% ( 50) 00:08:10.472 12754.314 - 12804.726: 78.0300% ( 48) 00:08:10.472 12804.726 - 12855.138: 78.4099% ( 44) 00:08:10.472 12855.138 - 12905.551: 78.7638% ( 41) 00:08:10.472 12905.551 - 13006.375: 79.4544% ( 80) 00:08:10.472 13006.375 - 13107.200: 80.2400% ( 91) 00:08:10.472 13107.200 - 13208.025: 80.9738% ( 85) 00:08:10.472 13208.025 - 13308.849: 81.7680% ( 92) 00:08:10.472 13308.849 - 13409.674: 82.3895% ( 72) 00:08:10.472 13409.674 - 13510.498: 83.0887% ( 81) 00:08:10.472 13510.498 - 13611.323: 83.9002% ( 94) 00:08:10.472 13611.323 - 13712.148: 84.6253% ( 84) 00:08:10.472 13712.148 - 13812.972: 85.3591% ( 85) 00:08:10.472 13812.972 - 13913.797: 86.0670% ( 82) 00:08:10.472 13913.797 - 14014.622: 86.8871% ( 95) 00:08:10.472 14014.622 - 14115.446: 87.5863% ( 81) 00:08:10.472 14115.446 - 14216.271: 88.3287% ( 86) 00:08:10.472 14216.271 - 14317.095: 88.9675% ( 74) 00:08:10.472 14317.095 - 14417.920: 89.6323% ( 77) 00:08:10.472 14417.920 - 14518.745: 90.2797% ( 75) 00:08:10.472 14518.745 - 14619.569: 90.9185% ( 74) 00:08:10.472 14619.569 - 14720.394: 91.6091% ( 80) 00:08:10.472 14720.394 - 14821.218: 92.1789% ( 66) 00:08:10.472 14821.218 - 14922.043: 92.7400% ( 65) 00:08:10.472 14922.043 - 15022.868: 93.3356% ( 69) 00:08:10.472 15022.868 - 15123.692: 93.9313% ( 69) 00:08:10.472 15123.692 - 15224.517: 94.5183% ( 68) 00:08:10.472 15224.517 - 15325.342: 95.1744% ( 76) 00:08:10.472 15325.342 - 15426.166: 95.6923% ( 60) 00:08:10.472 15426.166 - 15526.991: 96.0981% ( 47) 00:08:10.472 15526.991 - 15627.815: 96.4002% ( 35) 00:08:10.472 15627.815 - 15728.640: 96.7800% ( 44) 00:08:10.472 15728.640 - 15829.465: 97.0563% ( 32) 
00:08:10.472 15829.465 - 15930.289: 97.2894% ( 27) 00:08:10.472 15930.289 - 16031.114: 97.5052% ( 25) 00:08:10.472 16031.114 - 16131.938: 97.6951% ( 22) 00:08:10.472 16131.938 - 16232.763: 97.8246% ( 15) 00:08:10.472 16232.763 - 16333.588: 97.9195% ( 11) 00:08:10.472 16333.588 - 16434.412: 98.0059% ( 10) 00:08:10.472 16434.412 - 16535.237: 98.0836% ( 9) 00:08:10.472 16535.237 - 16636.062: 98.1699% ( 10) 00:08:10.472 16636.062 - 16736.886: 98.2821% ( 13) 00:08:10.472 16736.886 - 16837.711: 98.4116% ( 15) 00:08:10.472 16837.711 - 16938.535: 98.4893% ( 9) 00:08:10.472 16938.535 - 17039.360: 98.5497% ( 7) 00:08:10.472 17039.360 - 17140.185: 98.6015% ( 6) 00:08:10.472 17140.185 - 17241.009: 98.6533% ( 6) 00:08:10.472 17241.009 - 17341.834: 98.7051% ( 6) 00:08:10.472 17341.834 - 17442.658: 98.7655% ( 7) 00:08:10.472 17442.658 - 17543.483: 98.8173% ( 6) 00:08:10.472 17543.483 - 17644.308: 98.8691% ( 6) 00:08:10.472 17644.308 - 17745.132: 98.8950% ( 3) 00:08:10.472 28230.892 - 28432.542: 98.9037% ( 1) 00:08:10.472 28432.542 - 28634.191: 98.9641% ( 7) 00:08:10.472 28634.191 - 28835.840: 99.0245% ( 7) 00:08:10.472 28835.840 - 29037.489: 99.0849% ( 7) 00:08:10.472 29037.489 - 29239.138: 99.1626% ( 9) 00:08:10.472 29239.138 - 29440.788: 99.2835% ( 14) 00:08:10.472 29440.788 - 29642.437: 99.3957% ( 13) 00:08:10.472 29642.437 - 29844.086: 99.4475% ( 6) 00:08:10.472 34280.369 - 34482.018: 99.4907% ( 5) 00:08:10.472 34482.018 - 34683.668: 99.5511% ( 7) 00:08:10.472 34683.668 - 34885.317: 99.5943% ( 5) 00:08:10.472 34885.317 - 35086.966: 99.6633% ( 8) 00:08:10.472 35086.966 - 35288.615: 99.7238% ( 7) 00:08:10.472 35288.615 - 35490.265: 99.7842% ( 7) 00:08:10.472 35490.265 - 35691.914: 99.8532% ( 8) 00:08:10.472 35691.914 - 35893.563: 99.9223% ( 8) 00:08:10.472 35893.563 - 36095.212: 99.9914% ( 8) 00:08:10.472 36095.212 - 36296.862: 100.0000% ( 1) 00:08:10.472 00:08:10.472 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:10.472 
============================================================================== 00:08:10.473 Range in us Cumulative IO count 00:08:10.473 4209.428 - 4234.634: 0.0173% ( 2) 00:08:10.473 4234.634 - 4259.840: 0.0432% ( 3) 00:08:10.473 4259.840 - 4285.046: 0.0604% ( 2) 00:08:10.473 4285.046 - 4310.252: 0.0777% ( 2) 00:08:10.473 4310.252 - 4335.458: 0.0950% ( 2) 00:08:10.473 4335.458 - 4360.665: 0.1122% ( 2) 00:08:10.473 4360.665 - 4385.871: 0.1295% ( 2) 00:08:10.473 4385.871 - 4411.077: 0.1554% ( 3) 00:08:10.473 4411.077 - 4436.283: 0.1640% ( 1) 00:08:10.473 4436.283 - 4461.489: 0.1813% ( 2) 00:08:10.473 4461.489 - 4486.695: 0.1985% ( 2) 00:08:10.473 4486.695 - 4511.902: 0.2072% ( 1) 00:08:10.473 4511.902 - 4537.108: 0.2244% ( 2) 00:08:10.473 4537.108 - 4562.314: 0.2417% ( 2) 00:08:10.473 4562.314 - 4587.520: 0.2676% ( 3) 00:08:10.473 4587.520 - 4612.726: 0.2849% ( 2) 00:08:10.473 4612.726 - 4637.932: 0.3021% ( 2) 00:08:10.473 4637.932 - 4663.138: 0.3194% ( 2) 00:08:10.473 4663.138 - 4688.345: 0.3367% ( 2) 00:08:10.473 4688.345 - 4713.551: 0.3626% ( 3) 00:08:10.473 4713.551 - 4738.757: 0.3798% ( 2) 00:08:10.473 4738.757 - 4763.963: 0.3885% ( 1) 00:08:10.473 4763.963 - 4789.169: 0.4057% ( 2) 00:08:10.473 4789.169 - 4814.375: 0.4316% ( 3) 00:08:10.473 4814.375 - 4839.582: 0.4489% ( 2) 00:08:10.473 4839.582 - 4864.788: 0.4662% ( 2) 00:08:10.473 4864.788 - 4889.994: 0.4834% ( 2) 00:08:10.473 4889.994 - 4915.200: 0.5007% ( 2) 00:08:10.473 4915.200 - 4940.406: 0.5266% ( 3) 00:08:10.473 4940.406 - 4965.612: 0.5439% ( 2) 00:08:10.473 4965.612 - 4990.818: 0.5525% ( 1) 00:08:10.473 7057.723 - 7108.135: 0.5784% ( 3) 00:08:10.473 7108.135 - 7158.548: 0.6043% ( 3) 00:08:10.473 7158.548 - 7208.960: 0.6388% ( 4) 00:08:10.473 7208.960 - 7259.372: 0.7079% ( 8) 00:08:10.473 7259.372 - 7309.785: 0.7769% ( 8) 00:08:10.473 7309.785 - 7360.197: 0.8546% ( 9) 00:08:10.473 7360.197 - 7410.609: 0.9582% ( 12) 00:08:10.473 7410.609 - 7461.022: 1.1309% ( 20) 00:08:10.473 7461.022 - 7511.434: 
1.3985% ( 31) 00:08:10.473 7511.434 - 7561.846: 1.6488% ( 29) 00:08:10.473 7561.846 - 7612.258: 2.0028% ( 41) 00:08:10.473 7612.258 - 7662.671: 2.3394% ( 39) 00:08:10.473 7662.671 - 7713.083: 2.7797% ( 51) 00:08:10.473 7713.083 - 7763.495: 3.1682% ( 45) 00:08:10.473 7763.495 - 7813.908: 3.7293% ( 65) 00:08:10.473 7813.908 - 7864.320: 4.3336% ( 70) 00:08:10.473 7864.320 - 7914.732: 5.0328% ( 81) 00:08:10.473 7914.732 - 7965.145: 5.9133% ( 102) 00:08:10.473 7965.145 - 8015.557: 6.8284% ( 106) 00:08:10.473 8015.557 - 8065.969: 7.7434% ( 106) 00:08:10.473 8065.969 - 8116.382: 8.8311% ( 126) 00:08:10.473 8116.382 - 8166.794: 9.8498% ( 118) 00:08:10.473 8166.794 - 8217.206: 10.9289% ( 125) 00:08:10.473 8217.206 - 8267.618: 12.0597% ( 131) 00:08:10.473 8267.618 - 8318.031: 13.2597% ( 139) 00:08:10.473 8318.031 - 8368.443: 14.3905% ( 131) 00:08:10.473 8368.443 - 8418.855: 15.4265% ( 120) 00:08:10.473 8418.855 - 8469.268: 16.4451% ( 118) 00:08:10.473 8469.268 - 8519.680: 17.4983% ( 122) 00:08:10.473 8519.680 - 8570.092: 18.4651% ( 112) 00:08:10.473 8570.092 - 8620.505: 19.4924% ( 119) 00:08:10.473 8620.505 - 8670.917: 20.4593% ( 112) 00:08:10.473 8670.917 - 8721.329: 21.3829% ( 107) 00:08:10.473 8721.329 - 8771.742: 22.2548% ( 101) 00:08:10.473 8771.742 - 8822.154: 22.9023% ( 75) 00:08:10.473 8822.154 - 8872.566: 23.5584% ( 76) 00:08:10.473 8872.566 - 8922.978: 24.1972% ( 74) 00:08:10.473 8922.978 - 8973.391: 24.8273% ( 73) 00:08:10.473 8973.391 - 9023.803: 25.5093% ( 79) 00:08:10.473 9023.803 - 9074.215: 26.1654% ( 76) 00:08:10.473 9074.215 - 9124.628: 26.7438% ( 67) 00:08:10.473 9124.628 - 9175.040: 27.4430% ( 81) 00:08:10.473 9175.040 - 9225.452: 28.2545% ( 94) 00:08:10.473 9225.452 - 9275.865: 29.1350% ( 102) 00:08:10.473 9275.865 - 9326.277: 29.9637% ( 96) 00:08:10.473 9326.277 - 9376.689: 30.7579% ( 92) 00:08:10.473 9376.689 - 9427.102: 31.6039% ( 98) 00:08:10.473 9427.102 - 9477.514: 32.3981% ( 92) 00:08:10.473 9477.514 - 9527.926: 33.1664% ( 89) 00:08:10.473 
9527.926 - 9578.338: 33.9434% ( 90) 00:08:10.473 9578.338 - 9628.751: 34.8412% ( 104) 00:08:10.473 9628.751 - 9679.163: 35.7735% ( 108) 00:08:10.473 9679.163 - 9729.575: 36.6540% ( 102) 00:08:10.473 9729.575 - 9779.988: 37.4827% ( 96) 00:08:10.473 9779.988 - 9830.400: 38.3115% ( 96) 00:08:10.473 9830.400 - 9880.812: 39.2006% ( 103) 00:08:10.473 9880.812 - 9931.225: 40.2020% ( 116) 00:08:10.473 9931.225 - 9981.637: 41.2379% ( 120) 00:08:10.473 9981.637 - 10032.049: 42.2566% ( 118) 00:08:10.473 10032.049 - 10082.462: 43.2148% ( 111) 00:08:10.473 10082.462 - 10132.874: 44.1039% ( 103) 00:08:10.473 10132.874 - 10183.286: 45.0622% ( 111) 00:08:10.473 10183.286 - 10233.698: 46.0635% ( 116) 00:08:10.473 10233.698 - 10284.111: 47.1253% ( 123) 00:08:10.473 10284.111 - 10334.523: 48.0922% ( 112) 00:08:10.473 10334.523 - 10384.935: 49.0590% ( 112) 00:08:10.473 10384.935 - 10435.348: 49.9137% ( 99) 00:08:10.473 10435.348 - 10485.760: 50.7597% ( 98) 00:08:10.473 10485.760 - 10536.172: 51.6747% ( 106) 00:08:10.473 10536.172 - 10586.585: 52.5984% ( 107) 00:08:10.473 10586.585 - 10636.997: 53.6430% ( 121) 00:08:10.473 10636.997 - 10687.409: 54.5321% ( 103) 00:08:10.473 10687.409 - 10737.822: 55.4299% ( 104) 00:08:10.473 10737.822 - 10788.234: 56.3104% ( 102) 00:08:10.473 10788.234 - 10838.646: 57.1910% ( 102) 00:08:10.473 10838.646 - 10889.058: 57.9593% ( 89) 00:08:10.473 10889.058 - 10939.471: 58.8657% ( 105) 00:08:10.473 10939.471 - 10989.883: 59.5735% ( 82) 00:08:10.473 10989.883 - 11040.295: 60.2124% ( 74) 00:08:10.473 11040.295 - 11090.708: 60.9202% ( 82) 00:08:10.473 11090.708 - 11141.120: 61.5849% ( 77) 00:08:10.473 11141.120 - 11191.532: 62.3101% ( 84) 00:08:10.473 11191.532 - 11241.945: 62.9403% ( 73) 00:08:10.473 11241.945 - 11292.357: 63.4410% ( 58) 00:08:10.473 11292.357 - 11342.769: 63.9071% ( 54) 00:08:10.473 11342.769 - 11393.182: 64.4423% ( 62) 00:08:10.473 11393.182 - 11443.594: 64.8912% ( 52) 00:08:10.473 11443.594 - 11494.006: 65.3401% ( 52) 00:08:10.473 
11494.006 - 11544.418: 65.7372% ( 46) 00:08:10.473 11544.418 - 11594.831: 66.0653% ( 38) 00:08:10.473 11594.831 - 11645.243: 66.5401% ( 55) 00:08:10.473 11645.243 - 11695.655: 67.0062% ( 54) 00:08:10.473 11695.655 - 11746.068: 67.4551% ( 52) 00:08:10.473 11746.068 - 11796.480: 67.9040% ( 52) 00:08:10.473 11796.480 - 11846.892: 68.3615% ( 53) 00:08:10.473 11846.892 - 11897.305: 68.8622% ( 58) 00:08:10.473 11897.305 - 11947.717: 69.3456% ( 56) 00:08:10.473 11947.717 - 11998.129: 69.7859% ( 51) 00:08:10.473 11998.129 - 12048.542: 70.2521% ( 54) 00:08:10.473 12048.542 - 12098.954: 70.7355% ( 56) 00:08:10.473 12098.954 - 12149.366: 71.2880% ( 64) 00:08:10.473 12149.366 - 12199.778: 71.8750% ( 68) 00:08:10.473 12199.778 - 12250.191: 72.4102% ( 62) 00:08:10.473 12250.191 - 12300.603: 72.9627% ( 64) 00:08:10.473 12300.603 - 12351.015: 73.5670% ( 70) 00:08:10.473 12351.015 - 12401.428: 74.1799% ( 71) 00:08:10.473 12401.428 - 12451.840: 74.7928% ( 71) 00:08:10.473 12451.840 - 12502.252: 75.3539% ( 65) 00:08:10.473 12502.252 - 12552.665: 75.9496% ( 69) 00:08:10.473 12552.665 - 12603.077: 76.4848% ( 62) 00:08:10.473 12603.077 - 12653.489: 77.0028% ( 60) 00:08:10.473 12653.489 - 12703.902: 77.4948% ( 57) 00:08:10.473 12703.902 - 12754.314: 77.9955% ( 58) 00:08:10.473 12754.314 - 12804.726: 78.4099% ( 48) 00:08:10.473 12804.726 - 12855.138: 78.8156% ( 47) 00:08:10.473 12855.138 - 12905.551: 79.2645% ( 52) 00:08:10.473 12905.551 - 13006.375: 80.0673% ( 93) 00:08:10.473 13006.375 - 13107.200: 80.9306% ( 100) 00:08:10.473 13107.200 - 13208.025: 81.8198% ( 103) 00:08:10.473 13208.025 - 13308.849: 82.6140% ( 92) 00:08:10.473 13308.849 - 13409.674: 83.3305% ( 83) 00:08:10.473 13409.674 - 13510.498: 84.0988% ( 89) 00:08:10.473 13510.498 - 13611.323: 84.9275% ( 96) 00:08:10.473 13611.323 - 13712.148: 85.6958% ( 89) 00:08:10.473 13712.148 - 13812.972: 86.4727% ( 90) 00:08:10.473 13812.972 - 13913.797: 87.0943% ( 72) 00:08:10.473 13913.797 - 14014.622: 87.6122% ( 60) 00:08:10.473 
14014.622 - 14115.446: 88.1820% ( 66) 00:08:10.473 14115.446 - 14216.271: 88.6222% ( 51) 00:08:10.473 14216.271 - 14317.095: 89.0884% ( 54) 00:08:10.473 14317.095 - 14417.920: 89.7531% ( 77) 00:08:10.473 14417.920 - 14518.745: 90.4437% ( 80) 00:08:10.473 14518.745 - 14619.569: 91.1257% ( 79) 00:08:10.473 14619.569 - 14720.394: 91.6695% ( 63) 00:08:10.473 14720.394 - 14821.218: 92.2393% ( 66) 00:08:10.473 14821.218 - 14922.043: 92.7831% ( 63) 00:08:10.473 14922.043 - 15022.868: 93.3529% ( 66) 00:08:10.473 15022.868 - 15123.692: 93.9744% ( 72) 00:08:10.473 15123.692 - 15224.517: 94.4751% ( 58) 00:08:10.473 15224.517 - 15325.342: 94.8895% ( 48) 00:08:10.473 15325.342 - 15426.166: 95.1830% ( 34) 00:08:10.473 15426.166 - 15526.991: 95.5283% ( 40) 00:08:10.473 15526.991 - 15627.815: 95.9081% ( 44) 00:08:10.473 15627.815 - 15728.640: 96.1930% ( 33) 00:08:10.473 15728.640 - 15829.465: 96.5383% ( 40) 00:08:10.473 15829.465 - 15930.289: 96.8750% ( 39) 00:08:10.473 15930.289 - 16031.114: 97.1340% ( 30) 00:08:10.473 16031.114 - 16131.938: 97.3671% ( 27) 00:08:10.473 16131.938 - 16232.763: 97.6260% ( 30) 00:08:10.473 16232.763 - 16333.588: 97.8764% ( 29) 00:08:10.473 16333.588 - 16434.412: 98.0922% ( 25) 00:08:10.473 16434.412 - 16535.237: 98.3339% ( 28) 00:08:10.473 16535.237 - 16636.062: 98.4720% ( 16) 00:08:10.473 16636.062 - 16736.886: 98.5670% ( 11) 00:08:10.473 16736.886 - 16837.711: 98.6274% ( 7) 00:08:10.473 16837.711 - 16938.535: 98.6792% ( 6) 00:08:10.474 16938.535 - 17039.360: 98.7396% ( 7) 00:08:10.474 17039.360 - 17140.185: 98.7914% ( 6) 00:08:10.474 17140.185 - 17241.009: 98.8432% ( 6) 00:08:10.474 17241.009 - 17341.834: 98.8950% ( 6) 00:08:10.474 28029.243 - 28230.892: 98.9727% ( 9) 00:08:10.474 28230.892 - 28432.542: 99.0849% ( 13) 00:08:10.474 28432.542 - 28634.191: 99.1972% ( 13) 00:08:10.474 28634.191 - 28835.840: 99.3094% ( 13) 00:08:10.474 28835.840 - 29037.489: 99.4216% ( 13) 00:08:10.474 29037.489 - 29239.138: 99.4475% ( 3) 00:08:10.474 34482.018 - 
34683.668: 99.5079% ( 7) 00:08:10.474 34683.668 - 34885.317: 99.5684% ( 7) 00:08:10.474 34885.317 - 35086.966: 99.6374% ( 8) 00:08:10.474 35086.966 - 35288.615: 99.6979% ( 7) 00:08:10.474 35288.615 - 35490.265: 99.7583% ( 7) 00:08:10.474 35490.265 - 35691.914: 99.8273% ( 8) 00:08:10.474 35691.914 - 35893.563: 99.8964% ( 8) 00:08:10.474 35893.563 - 36095.212: 99.9568% ( 7) 00:08:10.474 36095.212 - 36296.862: 100.0000% ( 5) 00:08:10.474 00:08:10.474 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:10.474 ============================================================================== 00:08:10.474 Range in us Cumulative IO count 00:08:10.474 3881.748 - 3906.954: 0.0086% ( 1) 00:08:10.474 3906.954 - 3932.160: 0.0345% ( 3) 00:08:10.474 3932.160 - 3957.366: 0.0518% ( 2) 00:08:10.474 3957.366 - 3982.572: 0.0777% ( 3) 00:08:10.474 3982.572 - 4007.778: 0.0863% ( 1) 00:08:10.474 4007.778 - 4032.985: 0.1036% ( 2) 00:08:10.474 4032.985 - 4058.191: 0.1295% ( 3) 00:08:10.474 4058.191 - 4083.397: 0.1468% ( 2) 00:08:10.474 4083.397 - 4108.603: 0.1640% ( 2) 00:08:10.474 4108.603 - 4133.809: 0.1813% ( 2) 00:08:10.474 4133.809 - 4159.015: 0.1985% ( 2) 00:08:10.474 4159.015 - 4184.222: 0.2244% ( 3) 00:08:10.474 4184.222 - 4209.428: 0.2417% ( 2) 00:08:10.474 4209.428 - 4234.634: 0.2590% ( 2) 00:08:10.474 4234.634 - 4259.840: 0.2762% ( 2) 00:08:10.474 4259.840 - 4285.046: 0.2935% ( 2) 00:08:10.474 4285.046 - 4310.252: 0.3194% ( 3) 00:08:10.474 4310.252 - 4335.458: 0.3367% ( 2) 00:08:10.474 4335.458 - 4360.665: 0.3453% ( 1) 00:08:10.474 4360.665 - 4385.871: 0.3626% ( 2) 00:08:10.474 4385.871 - 4411.077: 0.3798% ( 2) 00:08:10.474 4411.077 - 4436.283: 0.3971% ( 2) 00:08:10.474 4436.283 - 4461.489: 0.4144% ( 2) 00:08:10.474 4461.489 - 4486.695: 0.4316% ( 2) 00:08:10.474 4486.695 - 4511.902: 0.4575% ( 3) 00:08:10.474 4511.902 - 4537.108: 0.4748% ( 2) 00:08:10.474 4537.108 - 4562.314: 0.4921% ( 2) 00:08:10.474 4562.314 - 4587.520: 0.5093% ( 2) 00:08:10.474 4587.520 - 
4612.726: 0.5266% ( 2) 00:08:10.474 4612.726 - 4637.932: 0.5439% ( 2) 00:08:10.474 4637.932 - 4663.138: 0.5525% ( 1) 00:08:10.474 7108.135 - 7158.548: 0.6043% ( 6) 00:08:10.474 7158.548 - 7208.960: 0.6561% ( 6) 00:08:10.474 7208.960 - 7259.372: 0.7165% ( 7) 00:08:10.474 7259.372 - 7309.785: 0.7856% ( 8) 00:08:10.474 7309.785 - 7360.197: 0.8633% ( 9) 00:08:10.474 7360.197 - 7410.609: 1.0273% ( 19) 00:08:10.474 7410.609 - 7461.022: 1.1395% ( 13) 00:08:10.474 7461.022 - 7511.434: 1.3381% ( 23) 00:08:10.474 7511.434 - 7561.846: 1.5711% ( 27) 00:08:10.474 7561.846 - 7612.258: 1.9423% ( 43) 00:08:10.474 7612.258 - 7662.671: 2.4344% ( 57) 00:08:10.474 7662.671 - 7713.083: 2.8574% ( 49) 00:08:10.474 7713.083 - 7763.495: 3.2545% ( 46) 00:08:10.474 7763.495 - 7813.908: 3.7983% ( 63) 00:08:10.474 7813.908 - 7864.320: 4.5062% ( 82) 00:08:10.474 7864.320 - 7914.732: 5.3004% ( 92) 00:08:10.474 7914.732 - 7965.145: 6.2500% ( 110) 00:08:10.474 7965.145 - 8015.557: 7.1478% ( 104) 00:08:10.474 8015.557 - 8065.969: 8.1233% ( 113) 00:08:10.474 8065.969 - 8116.382: 9.2110% ( 126) 00:08:10.474 8116.382 - 8166.794: 10.3419% ( 131) 00:08:10.474 8166.794 - 8217.206: 11.4382% ( 127) 00:08:10.474 8217.206 - 8267.618: 12.5777% ( 132) 00:08:10.474 8267.618 - 8318.031: 13.6136% ( 120) 00:08:10.474 8318.031 - 8368.443: 14.6581% ( 121) 00:08:10.474 8368.443 - 8418.855: 15.7286% ( 124) 00:08:10.474 8418.855 - 8469.268: 16.7990% ( 124) 00:08:10.474 8469.268 - 8519.680: 17.7055% ( 105) 00:08:10.474 8519.680 - 8570.092: 18.6378% ( 108) 00:08:10.474 8570.092 - 8620.505: 19.6133% ( 113) 00:08:10.474 8620.505 - 8670.917: 20.5715% ( 111) 00:08:10.474 8670.917 - 8721.329: 21.3916% ( 95) 00:08:10.474 8721.329 - 8771.742: 22.1081% ( 83) 00:08:10.474 8771.742 - 8822.154: 22.7555% ( 75) 00:08:10.474 8822.154 - 8872.566: 23.3512% ( 69) 00:08:10.474 8872.566 - 8922.978: 23.9814% ( 73) 00:08:10.474 8922.978 - 8973.391: 24.5684% ( 68) 00:08:10.474 8973.391 - 9023.803: 25.1899% ( 72) 00:08:10.474 9023.803 - 
9074.215: 25.9410% ( 87) 00:08:10.474 9074.215 - 9124.628: 26.6920% ( 87) 00:08:10.474 9124.628 - 9175.040: 27.4171% ( 84) 00:08:10.474 9175.040 - 9225.452: 28.1509% ( 85) 00:08:10.474 9225.452 - 9275.865: 28.9537% ( 93) 00:08:10.474 9275.865 - 9326.277: 29.6702% ( 83) 00:08:10.474 9326.277 - 9376.689: 30.5939% ( 107) 00:08:10.474 9376.689 - 9427.102: 31.4485% ( 99) 00:08:10.474 9427.102 - 9477.514: 32.2341% ( 91) 00:08:10.474 9477.514 - 9527.926: 33.0110% ( 90) 00:08:10.474 9527.926 - 9578.338: 33.8570% ( 98) 00:08:10.474 9578.338 - 9628.751: 34.6599% ( 93) 00:08:10.474 9628.751 - 9679.163: 35.4800% ( 95) 00:08:10.474 9679.163 - 9729.575: 36.3260% ( 98) 00:08:10.474 9729.575 - 9779.988: 37.2756% ( 110) 00:08:10.474 9779.988 - 9830.400: 38.2079% ( 108) 00:08:10.474 9830.400 - 9880.812: 39.1229% ( 106) 00:08:10.474 9880.812 - 9931.225: 39.8826% ( 88) 00:08:10.474 9931.225 - 9981.637: 40.6595% ( 90) 00:08:10.474 9981.637 - 10032.049: 41.4278% ( 89) 00:08:10.474 10032.049 - 10082.462: 42.2393% ( 94) 00:08:10.474 10082.462 - 10132.874: 43.1889% ( 110) 00:08:10.474 10132.874 - 10183.286: 44.2507% ( 123) 00:08:10.474 10183.286 - 10233.698: 45.3384% ( 126) 00:08:10.474 10233.698 - 10284.111: 46.2362% ( 104) 00:08:10.474 10284.111 - 10334.523: 47.3671% ( 131) 00:08:10.474 10334.523 - 10384.935: 48.4116% ( 121) 00:08:10.474 10384.935 - 10435.348: 49.4475% ( 120) 00:08:10.474 10435.348 - 10485.760: 50.5093% ( 123) 00:08:10.474 10485.760 - 10536.172: 51.4762% ( 112) 00:08:10.474 10536.172 - 10586.585: 52.4171% ( 109) 00:08:10.474 10586.585 - 10636.997: 53.3408% ( 107) 00:08:10.474 10636.997 - 10687.409: 54.3163% ( 113) 00:08:10.474 10687.409 - 10737.822: 55.3177% ( 116) 00:08:10.474 10737.822 - 10788.234: 56.2845% ( 112) 00:08:10.474 10788.234 - 10838.646: 57.2514% ( 112) 00:08:10.474 10838.646 - 10889.058: 58.2700% ( 118) 00:08:10.474 10889.058 - 10939.471: 59.1333% ( 100) 00:08:10.474 10939.471 - 10989.883: 59.9534% ( 95) 00:08:10.474 10989.883 - 11040.295: 60.7994% ( 98) 
00:08:10.474 11040.295 - 11090.708: 61.4555% ( 76) 00:08:10.474 11090.708 - 11141.120: 62.2151% ( 88) 00:08:10.474 11141.120 - 11191.532: 62.9575% ( 86) 00:08:10.474 11191.532 - 11241.945: 63.6481% ( 80) 00:08:10.474 11241.945 - 11292.357: 64.2524% ( 70) 00:08:10.474 11292.357 - 11342.769: 64.7358% ( 56) 00:08:10.474 11342.769 - 11393.182: 65.2452% ( 59) 00:08:10.474 11393.182 - 11443.594: 65.7200% ( 55) 00:08:10.474 11443.594 - 11494.006: 66.1602% ( 51) 00:08:10.474 11494.006 - 11544.418: 66.5573% ( 46) 00:08:10.474 11544.418 - 11594.831: 66.9631% ( 47) 00:08:10.474 11594.831 - 11645.243: 67.3688% ( 47) 00:08:10.474 11645.243 - 11695.655: 67.7486% ( 44) 00:08:10.474 11695.655 - 11746.068: 68.1026% ( 41) 00:08:10.474 11746.068 - 11796.480: 68.4651% ( 42) 00:08:10.474 11796.480 - 11846.892: 68.8450% ( 44) 00:08:10.474 11846.892 - 11897.305: 69.3370% ( 57) 00:08:10.474 11897.305 - 11947.717: 69.7686% ( 50) 00:08:10.474 11947.717 - 11998.129: 70.2003% ( 50) 00:08:10.474 11998.129 - 12048.542: 70.6146% ( 48) 00:08:10.474 12048.542 - 12098.954: 71.1326% ( 60) 00:08:10.474 12098.954 - 12149.366: 71.4865% ( 41) 00:08:10.474 12149.366 - 12199.778: 71.8836% ( 46) 00:08:10.474 12199.778 - 12250.191: 72.1771% ( 34) 00:08:10.474 12250.191 - 12300.603: 72.6260% ( 52) 00:08:10.474 12300.603 - 12351.015: 73.1526% ( 61) 00:08:10.474 12351.015 - 12401.428: 73.6360% ( 56) 00:08:10.474 12401.428 - 12451.840: 74.0936% ( 53) 00:08:10.474 12451.840 - 12502.252: 74.6461% ( 64) 00:08:10.474 12502.252 - 12552.665: 75.3626% ( 83) 00:08:10.474 12552.665 - 12603.077: 75.9323% ( 66) 00:08:10.474 12603.077 - 12653.489: 76.5107% ( 67) 00:08:10.474 12653.489 - 12703.902: 77.0805% ( 66) 00:08:10.474 12703.902 - 12754.314: 77.6761% ( 69) 00:08:10.474 12754.314 - 12804.726: 78.2977% ( 72) 00:08:10.474 12804.726 - 12855.138: 78.9192% ( 72) 00:08:10.474 12855.138 - 12905.551: 79.4372% ( 60) 00:08:10.474 12905.551 - 13006.375: 80.4990% ( 123) 00:08:10.474 13006.375 - 13107.200: 81.4227% ( 107) 
00:08:10.474 13107.200 - 13208.025: 82.3550% ( 108) 00:08:10.474 13208.025 - 13308.849: 83.3736% ( 118) 00:08:10.474 13308.849 - 13409.674: 84.1506% ( 90) 00:08:10.474 13409.674 - 13510.498: 84.6685% ( 60) 00:08:10.474 13510.498 - 13611.323: 85.1951% ( 61) 00:08:10.474 13611.323 - 13712.148: 85.6872% ( 57) 00:08:10.474 13712.148 - 13812.972: 86.3346% ( 75) 00:08:10.474 13812.972 - 13913.797: 86.9734% ( 74) 00:08:10.474 13913.797 - 14014.622: 87.5863% ( 71) 00:08:10.474 14014.622 - 14115.446: 88.2856% ( 81) 00:08:10.474 14115.446 - 14216.271: 88.9071% ( 72) 00:08:10.474 14216.271 - 14317.095: 89.5459% ( 74) 00:08:10.474 14317.095 - 14417.920: 90.2538% ( 82) 00:08:10.474 14417.920 - 14518.745: 90.7977% ( 63) 00:08:10.474 14518.745 - 14619.569: 91.4710% ( 78) 00:08:10.474 14619.569 - 14720.394: 92.0062% ( 62) 00:08:10.475 14720.394 - 14821.218: 92.6191% ( 71) 00:08:10.475 14821.218 - 14922.043: 93.1112% ( 57) 00:08:10.475 14922.043 - 15022.868: 93.6032% ( 57) 00:08:10.475 15022.868 - 15123.692: 93.9917% ( 45) 00:08:10.475 15123.692 - 15224.517: 94.3974% ( 47) 00:08:10.475 15224.517 - 15325.342: 94.7859% ( 45) 00:08:10.475 15325.342 - 15426.166: 95.2089% ( 49) 00:08:10.475 15426.166 - 15526.991: 95.5801% ( 43) 00:08:10.475 15526.991 - 15627.815: 95.8995% ( 37) 00:08:10.475 15627.815 - 15728.640: 96.2794% ( 44) 00:08:10.475 15728.640 - 15829.465: 96.6419% ( 42) 00:08:10.475 15829.465 - 15930.289: 96.9527% ( 36) 00:08:10.475 15930.289 - 16031.114: 97.2376% ( 33) 00:08:10.475 16031.114 - 16131.938: 97.4793% ( 28) 00:08:10.475 16131.938 - 16232.763: 97.6692% ( 22) 00:08:10.475 16232.763 - 16333.588: 97.8246% ( 18) 00:08:10.475 16333.588 - 16434.412: 97.9886% ( 19) 00:08:10.475 16434.412 - 16535.237: 98.1267% ( 16) 00:08:10.475 16535.237 - 16636.062: 98.2390% ( 13) 00:08:10.475 16636.062 - 16736.886: 98.3598% ( 14) 00:08:10.475 16736.886 - 16837.711: 98.4634% ( 12) 00:08:10.475 16837.711 - 16938.535: 98.5066% ( 5) 00:08:10.475 16938.535 - 17039.360: 98.5584% ( 6) 
00:08:10.475 17039.360 - 17140.185: 98.6102% ( 6) 00:08:10.475 17140.185 - 17241.009: 98.6619% ( 6) 00:08:10.475 17241.009 - 17341.834: 98.7137% ( 6) 00:08:10.475 17341.834 - 17442.658: 98.7742% ( 7) 00:08:10.475 17442.658 - 17543.483: 98.8260% ( 6) 00:08:10.475 17543.483 - 17644.308: 98.8778% ( 6) 00:08:10.475 17644.308 - 17745.132: 98.8950% ( 2) 00:08:10.475 27625.945 - 27827.594: 98.9209% ( 3) 00:08:10.475 27827.594 - 28029.243: 99.0245% ( 12) 00:08:10.475 28029.243 - 28230.892: 99.1367% ( 13) 00:08:10.475 28230.892 - 28432.542: 99.2490% ( 13) 00:08:10.475 28432.542 - 28634.191: 99.3612% ( 13) 00:08:10.475 28634.191 - 28835.840: 99.4475% ( 10) 00:08:10.475 34482.018 - 34683.668: 99.4734% ( 3) 00:08:10.475 34683.668 - 34885.317: 99.5338% ( 7) 00:08:10.475 34885.317 - 35086.966: 99.5856% ( 6) 00:08:10.475 35086.966 - 35288.615: 99.6461% ( 7) 00:08:10.475 35288.615 - 35490.265: 99.7151% ( 8) 00:08:10.475 35490.265 - 35691.914: 99.7756% ( 7) 00:08:10.475 35691.914 - 35893.563: 99.8446% ( 8) 00:08:10.475 35893.563 - 36095.212: 99.9050% ( 7) 00:08:10.475 36095.212 - 36296.862: 99.9655% ( 7) 00:08:10.475 36296.862 - 36498.511: 100.0000% ( 4) 00:08:10.475 00:08:10.475 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:10.475 ============================================================================== 00:08:10.475 Range in us Cumulative IO count 00:08:10.475 3453.243 - 3478.449: 0.0429% ( 5) 00:08:10.475 3478.449 - 3503.655: 0.0601% ( 2) 00:08:10.475 3503.655 - 3528.862: 0.0773% ( 2) 00:08:10.475 3528.862 - 3554.068: 0.0944% ( 2) 00:08:10.475 3554.068 - 3579.274: 0.1116% ( 2) 00:08:10.475 3579.274 - 3604.480: 0.1288% ( 2) 00:08:10.475 3604.480 - 3629.686: 0.1459% ( 2) 00:08:10.475 3629.686 - 3654.892: 0.1631% ( 2) 00:08:10.475 3654.892 - 3680.098: 0.1803% ( 2) 00:08:10.475 3680.098 - 3705.305: 0.1889% ( 1) 00:08:10.475 3705.305 - 3730.511: 0.2060% ( 2) 00:08:10.475 3730.511 - 3755.717: 0.2232% ( 2) 00:08:10.475 3755.717 - 3780.923: 0.2318% ( 1) 
00:08:10.475 3780.923 - 3806.129: 0.2490% ( 2) 00:08:10.475 3806.129 - 3831.335: 0.2661% ( 2) 00:08:10.475 3831.335 - 3856.542: 0.2833% ( 2) 00:08:10.475 3856.542 - 3881.748: 0.3005% ( 2) 00:08:10.475 3881.748 - 3906.954: 0.3177% ( 2) 00:08:10.475 3906.954 - 3932.160: 0.3348% ( 2) 00:08:10.475 3932.160 - 3957.366: 0.3520% ( 2) 00:08:10.475 3957.366 - 3982.572: 0.3692% ( 2) 00:08:10.475 3982.572 - 4007.778: 0.3863% ( 2) 00:08:10.475 4007.778 - 4032.985: 0.4035% ( 2) 00:08:10.475 4032.985 - 4058.191: 0.4207% ( 2) 00:08:10.475 4058.191 - 4083.397: 0.4378% ( 2) 00:08:10.475 4083.397 - 4108.603: 0.4636% ( 3) 00:08:10.475 4108.603 - 4133.809: 0.4808% ( 2) 00:08:10.475 4133.809 - 4159.015: 0.4979% ( 2) 00:08:10.475 4159.015 - 4184.222: 0.5151% ( 2) 00:08:10.475 4184.222 - 4209.428: 0.5323% ( 2) 00:08:10.475 4209.428 - 4234.634: 0.5495% ( 2) 00:08:10.475 7158.548 - 7208.960: 0.6696% ( 14) 00:08:10.475 7208.960 - 7259.372: 0.7898% ( 14) 00:08:10.475 7259.372 - 7309.785: 0.8843% ( 11) 00:08:10.475 7309.785 - 7360.197: 0.9959% ( 13) 00:08:10.475 7360.197 - 7410.609: 1.1676% ( 20) 00:08:10.475 7410.609 - 7461.022: 1.3565% ( 22) 00:08:10.475 7461.022 - 7511.434: 1.5883% ( 27) 00:08:10.475 7511.434 - 7561.846: 1.8802% ( 34) 00:08:10.475 7561.846 - 7612.258: 2.2236% ( 40) 00:08:10.475 7612.258 - 7662.671: 2.6700% ( 52) 00:08:10.475 7662.671 - 7713.083: 3.1164% ( 52) 00:08:10.475 7713.083 - 7763.495: 3.7088% ( 69) 00:08:10.475 7763.495 - 7813.908: 4.3527% ( 75) 00:08:10.475 7813.908 - 7864.320: 4.9193% ( 66) 00:08:10.475 7864.320 - 7914.732: 5.6920% ( 90) 00:08:10.475 7914.732 - 7965.145: 6.6192% ( 108) 00:08:10.475 7965.145 - 8015.557: 7.6666% ( 122) 00:08:10.475 8015.557 - 8065.969: 8.5938% ( 108) 00:08:10.475 8065.969 - 8116.382: 9.6154% ( 119) 00:08:10.475 8116.382 - 8166.794: 10.7315% ( 130) 00:08:10.475 8166.794 - 8217.206: 11.7102% ( 114) 00:08:10.475 8217.206 - 8267.618: 12.8091% ( 128) 00:08:10.475 8267.618 - 8318.031: 13.9251% ( 130) 00:08:10.475 8318.031 - 8368.443: 
14.9296% ( 117) 00:08:10.475 8368.443 - 8418.855: 15.8396% ( 106) 00:08:10.475 8418.855 - 8469.268: 16.9643% ( 131) 00:08:10.475 8469.268 - 8519.680: 17.9087% ( 110) 00:08:10.475 8519.680 - 8570.092: 18.9131% ( 117) 00:08:10.475 8570.092 - 8620.505: 19.8832% ( 113) 00:08:10.475 8620.505 - 8670.917: 20.8620% ( 114) 00:08:10.475 8670.917 - 8721.329: 21.6089% ( 87) 00:08:10.475 8721.329 - 8771.742: 22.3644% ( 88) 00:08:10.475 8771.742 - 8822.154: 23.1370% ( 90) 00:08:10.475 8822.154 - 8872.566: 23.8753% ( 86) 00:08:10.475 8872.566 - 8922.978: 24.4677% ( 69) 00:08:10.475 8922.978 - 8973.391: 25.0687% ( 70) 00:08:10.475 8973.391 - 9023.803: 25.7297% ( 77) 00:08:10.475 9023.803 - 9074.215: 26.4080% ( 79) 00:08:10.475 9074.215 - 9124.628: 26.9918% ( 68) 00:08:10.475 9124.628 - 9175.040: 27.6099% ( 72) 00:08:10.475 9175.040 - 9225.452: 28.2624% ( 76) 00:08:10.475 9225.452 - 9275.865: 28.9749% ( 83) 00:08:10.475 9275.865 - 9326.277: 29.7047% ( 85) 00:08:10.475 9326.277 - 9376.689: 30.4945% ( 92) 00:08:10.475 9376.689 - 9427.102: 31.1985% ( 82) 00:08:10.475 9427.102 - 9477.514: 31.9454% ( 87) 00:08:10.475 9477.514 - 9527.926: 32.7266% ( 91) 00:08:10.475 9527.926 - 9578.338: 33.5079% ( 91) 00:08:10.475 9578.338 - 9628.751: 34.3407% ( 97) 00:08:10.475 9628.751 - 9679.163: 35.1133% ( 90) 00:08:10.475 9679.163 - 9729.575: 35.9461% ( 97) 00:08:10.475 9729.575 - 9779.988: 36.7617% ( 95) 00:08:10.475 9779.988 - 9830.400: 37.6202% ( 100) 00:08:10.475 9830.400 - 9880.812: 38.5817% ( 112) 00:08:10.475 9880.812 - 9931.225: 39.6463% ( 124) 00:08:10.475 9931.225 - 9981.637: 40.6679% ( 119) 00:08:10.475 9981.637 - 10032.049: 41.6209% ( 111) 00:08:10.475 10032.049 - 10082.462: 42.5137% ( 104) 00:08:10.475 10082.462 - 10132.874: 43.5010% ( 115) 00:08:10.475 10132.874 - 10183.286: 44.3939% ( 104) 00:08:10.475 10183.286 - 10233.698: 45.4413% ( 122) 00:08:10.475 10233.698 - 10284.111: 46.4457% ( 117) 00:08:10.475 10284.111 - 10334.523: 47.5446% ( 128) 00:08:10.475 10334.523 - 10384.935: 
48.5405% ( 116) 00:08:10.475 10384.935 - 10435.348: 49.4849% ( 110) 00:08:10.475 10435.348 - 10485.760: 50.4550% ( 113) 00:08:10.475 10485.760 - 10536.172: 51.4852% ( 120) 00:08:10.475 10536.172 - 10586.585: 52.6099% ( 131) 00:08:10.475 10586.585 - 10636.997: 53.6144% ( 117) 00:08:10.475 10636.997 - 10687.409: 54.6188% ( 117) 00:08:10.475 10687.409 - 10737.822: 55.6061% ( 115) 00:08:10.475 10737.822 - 10788.234: 56.4990% ( 104) 00:08:10.475 10788.234 - 10838.646: 57.3403% ( 98) 00:08:10.475 10838.646 - 10889.058: 58.1044% ( 89) 00:08:10.475 10889.058 - 10939.471: 58.8170% ( 83) 00:08:10.475 10939.471 - 10989.883: 59.5467% ( 85) 00:08:10.475 10989.883 - 11040.295: 60.2593% ( 83) 00:08:10.475 11040.295 - 11090.708: 60.9375% ( 79) 00:08:10.475 11090.708 - 11141.120: 61.6415% ( 82) 00:08:10.475 11141.120 - 11191.532: 62.2424% ( 70) 00:08:10.475 11191.532 - 11241.945: 62.8520% ( 71) 00:08:10.475 11241.945 - 11292.357: 63.5045% ( 76) 00:08:10.475 11292.357 - 11342.769: 64.1398% ( 74) 00:08:10.475 11342.769 - 11393.182: 64.7751% ( 74) 00:08:10.475 11393.182 - 11443.594: 65.3674% ( 69) 00:08:10.475 11443.594 - 11494.006: 65.9512% ( 68) 00:08:10.475 11494.006 - 11544.418: 66.5093% ( 65) 00:08:10.475 11544.418 - 11594.831: 67.0158% ( 59) 00:08:10.475 11594.831 - 11645.243: 67.4794% ( 54) 00:08:10.475 11645.243 - 11695.655: 67.9602% ( 56) 00:08:10.475 11695.655 - 11746.068: 68.4238% ( 54) 00:08:10.475 11746.068 - 11796.480: 68.8874% ( 54) 00:08:10.475 11796.480 - 11846.892: 69.3681% ( 56) 00:08:10.475 11846.892 - 11897.305: 69.9348% ( 66) 00:08:10.475 11897.305 - 11947.717: 70.3812% ( 52) 00:08:10.475 11947.717 - 11998.129: 70.8791% ( 58) 00:08:10.475 11998.129 - 12048.542: 71.3255% ( 52) 00:08:10.475 12048.542 - 12098.954: 71.7891% ( 54) 00:08:10.475 12098.954 - 12149.366: 72.2527% ( 54) 00:08:10.475 12149.366 - 12199.778: 72.6820% ( 50) 00:08:10.475 12199.778 - 12250.191: 73.0683% ( 45) 00:08:10.475 12250.191 - 12300.603: 73.5148% ( 52) 00:08:10.475 12300.603 - 12351.015: 
73.8496% ( 39) 00:08:10.475 12351.015 - 12401.428: 74.2102% ( 42) 00:08:10.475 12401.428 - 12451.840: 74.5536% ( 40) 00:08:10.475 12451.840 - 12502.252: 74.9657% ( 48) 00:08:10.475 12502.252 - 12552.665: 75.3520% ( 45) 00:08:10.476 12552.665 - 12603.077: 75.6696% ( 37) 00:08:10.476 12603.077 - 12653.489: 76.2534% ( 68) 00:08:10.476 12653.489 - 12703.902: 76.6741% ( 49) 00:08:10.476 12703.902 - 12754.314: 77.0261% ( 41) 00:08:10.476 12754.314 - 12804.726: 77.4468% ( 49) 00:08:10.476 12804.726 - 12855.138: 77.9619% ( 60) 00:08:10.476 12855.138 - 12905.551: 78.5714% ( 71) 00:08:10.476 12905.551 - 13006.375: 79.6274% ( 123) 00:08:10.476 13006.375 - 13107.200: 80.5117% ( 103) 00:08:10.476 13107.200 - 13208.025: 81.4389% ( 108) 00:08:10.476 13208.025 - 13308.849: 82.5893% ( 134) 00:08:10.476 13308.849 - 13409.674: 83.6710% ( 126) 00:08:10.476 13409.674 - 13510.498: 84.4008% ( 85) 00:08:10.476 13510.498 - 13611.323: 85.1477% ( 87) 00:08:10.476 13611.323 - 13712.148: 85.7400% ( 69) 00:08:10.476 13712.148 - 13812.972: 86.3410% ( 70) 00:08:10.476 13812.972 - 13913.797: 86.8561% ( 60) 00:08:10.476 13913.797 - 14014.622: 87.3455% ( 57) 00:08:10.476 14014.622 - 14115.446: 87.7661% ( 49) 00:08:10.476 14115.446 - 14216.271: 88.3156% ( 64) 00:08:10.476 14216.271 - 14317.095: 89.0367% ( 84) 00:08:10.476 14317.095 - 14417.920: 89.5261% ( 57) 00:08:10.476 14417.920 - 14518.745: 90.1013% ( 67) 00:08:10.476 14518.745 - 14619.569: 90.8740% ( 90) 00:08:10.476 14619.569 - 14720.394: 91.5780% ( 82) 00:08:10.476 14720.394 - 14821.218: 92.4279% ( 99) 00:08:10.476 14821.218 - 14922.043: 93.1576% ( 85) 00:08:10.476 14922.043 - 15022.868: 93.8187% ( 77) 00:08:10.476 15022.868 - 15123.692: 94.3510% ( 62) 00:08:10.476 15123.692 - 15224.517: 94.7716% ( 49) 00:08:10.476 15224.517 - 15325.342: 95.2095% ( 51) 00:08:10.476 15325.342 - 15426.166: 95.6559% ( 52) 00:08:10.476 15426.166 - 15526.991: 95.9821% ( 38) 00:08:10.476 15526.991 - 15627.815: 96.3170% ( 39) 00:08:10.476 15627.815 - 15728.640: 
96.5659% ( 29)
00:08:10.476 15728.640 - 15829.465: 96.7291% ( 19)
00:08:10.476 15829.465 - 15930.289: 96.8836% ( 18)
00:08:10.476 15930.289 - 16031.114: 97.0381% ( 18)
00:08:10.476 16031.114 - 16131.938: 97.1669% ( 15)
00:08:10.476 16131.938 - 16232.763: 97.3043% ( 16)
00:08:10.476 16232.763 - 16333.588: 97.4588% ( 18)
00:08:10.476 16333.588 - 16434.412: 97.5876% ( 15)
00:08:10.476 16434.412 - 16535.237: 97.7593% ( 20)
00:08:10.476 16535.237 - 16636.062: 97.9481% ( 22)
00:08:10.476 16636.062 - 16736.886: 98.1198% ( 20)
00:08:10.476 16736.886 - 16837.711: 98.2830% ( 19)
00:08:10.476 16837.711 - 16938.535: 98.3946% ( 13)
00:08:10.476 16938.535 - 17039.360: 98.4976% ( 12)
00:08:10.476 17039.360 - 17140.185: 98.6006% ( 12)
00:08:10.476 17140.185 - 17241.009: 98.7122% ( 13)
00:08:10.476 17241.009 - 17341.834: 98.8152% ( 12)
00:08:10.476 17341.834 - 17442.658: 98.8753% ( 7)
00:08:10.476 17442.658 - 17543.483: 98.9011% ( 3)
00:08:10.476 21979.766 - 22080.591: 98.9354% ( 4)
00:08:10.476 22080.591 - 22181.415: 98.9955% ( 7)
00:08:10.476 22181.415 - 22282.240: 99.0470% ( 6)
00:08:10.476 22282.240 - 22383.065: 99.1071% ( 7)
00:08:10.476 22383.065 - 22483.889: 99.1672% ( 7)
00:08:10.476 22483.889 - 22584.714: 99.2188% ( 6)
00:08:10.476 22584.714 - 22685.538: 99.2788% ( 7)
00:08:10.476 22685.538 - 22786.363: 99.3389% ( 7)
00:08:10.476 22786.363 - 22887.188: 99.3905% ( 6)
00:08:10.476 22887.188 - 22988.012: 99.4505% ( 7)
00:08:10.476 27827.594 - 28029.243: 99.4591% ( 1)
00:08:10.476 28029.243 - 28230.892: 99.5793% ( 14)
00:08:10.476 28230.892 - 28432.542: 99.6909% ( 13)
00:08:10.476 28432.542 - 28634.191: 99.8025% ( 13)
00:08:10.476 28634.191 - 28835.840: 99.9227% ( 14)
00:08:10.476 28835.840 - 29037.489: 100.0000% ( 9)
00:08:10.476
00:08:10.476 08:52:04 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:11.867 Initializing NVMe Controllers
00:08:11.867 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:11.867 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:11.867 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:11.867 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:11.867 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:11.867 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:11.867 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:11.867 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:11.867 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:11.867 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:11.867 Initialization complete. Launching workers.
00:08:11.867 ========================================================
00:08:11.867 Latency(us)
00:08:11.867 Device Information : IOPS MiB/s Average min max
00:08:11.867 PCIE (0000:00:10.0) NSID 1 from core 0: 12764.88 149.59 10031.70 7004.51 26592.50
00:08:11.867 PCIE (0000:00:11.0) NSID 1 from core 0: 12764.88 149.59 10024.26 6853.24 25827.54
00:08:11.867 PCIE (0000:00:13.0) NSID 1 from core 0: 12764.88 149.59 10016.05 5466.98 25978.44
00:08:11.867 PCIE (0000:00:12.0) NSID 1 from core 0: 12764.88 149.59 10007.88 5198.88 25482.54
00:08:11.867 PCIE (0000:00:12.0) NSID 2 from core 0: 12764.88 149.59 10000.01 4878.71 25259.46
00:08:11.867 PCIE (0000:00:12.0) NSID 3 from core 0: 12764.88 149.59 9991.70 4249.53 24573.71
00:08:11.867 ========================================================
00:08:11.867 Total : 76589.30 897.53 10011.93 4249.53 26592.50
00:08:11.867
00:08:11.867 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:11.867 =================================================================================
00:08:11.867 1.00000% : 7511.434us
00:08:11.867 10.00000% : 8015.557us
00:08:11.867 25.00000% : 8570.092us
00:08:11.867 50.00000% : 9578.338us
00:08:11.867 75.00000% : 11040.295us
00:08:11.867 90.00000% : 12653.489us
00:08:11.867 95.00000% : 13510.498us
00:08:11.867 98.00000% : 15123.692us
00:08:11.867 99.00000% : 16434.412us
00:08:11.867 99.50000% : 18854.203us
00:08:11.867 99.90000% : 26214.400us
00:08:11.867 99.99000% : 26617.698us
00:08:11.867 99.99900% : 26617.698us
00:08:11.867 99.99990% : 26617.698us
00:08:11.867 99.99999% : 26617.698us
00:08:11.867
00:08:11.867 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:11.867 =================================================================================
00:08:11.867 1.00000% : 7713.083us
00:08:11.867 10.00000% : 8065.969us
00:08:11.867 25.00000% : 8469.268us
00:08:11.867 50.00000% : 9578.338us
00:08:11.867 75.00000% : 10989.883us
00:08:11.867 90.00000% : 12703.902us
00:08:11.867 95.00000% : 13510.498us
00:08:11.867 98.00000% : 15022.868us
00:08:11.867 99.00000% : 16232.763us
00:08:11.867 99.50000% : 19660.800us
00:08:11.867 99.90000% : 25508.628us
00:08:11.867 99.99000% : 25811.102us
00:08:11.867 99.99900% : 26012.751us
00:08:11.867 99.99990% : 26012.751us
00:08:11.867 99.99999% : 26012.751us
00:08:11.867
00:08:11.867 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:11.867 =================================================================================
00:08:11.867 1.00000% : 7561.846us
00:08:11.867 10.00000% : 8065.969us
00:08:11.867 25.00000% : 8469.268us
00:08:11.867 50.00000% : 9578.338us
00:08:11.867 75.00000% : 10989.883us
00:08:11.867 90.00000% : 12603.077us
00:08:11.867 95.00000% : 13510.498us
00:08:11.867 98.00000% : 14922.043us
00:08:11.867 99.00000% : 16131.938us
00:08:11.867 99.50000% : 19761.625us
00:08:11.867 99.90000% : 25811.102us
00:08:11.867 99.99000% : 26012.751us
00:08:11.867 99.99900% : 26012.751us
00:08:11.867 99.99990% : 26012.751us
00:08:11.867 99.99999% : 26012.751us
00:08:11.867
00:08:11.867 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:11.867 =================================================================================
00:08:11.867 1.00000% : 7612.258us
00:08:11.867 10.00000% : 8065.969us
00:08:11.867 25.00000% : 8469.268us
00:08:11.867 50.00000% : 9628.751us
00:08:11.867 75.00000% : 10939.471us
00:08:11.868 90.00000% : 12603.077us
00:08:11.868 95.00000% : 13611.323us
00:08:11.868 98.00000% : 14922.043us
00:08:11.868 99.00000% : 16232.763us
00:08:11.868 99.50000% : 19761.625us
00:08:11.868 99.90000% : 25105.329us
00:08:11.868 99.99000% : 25508.628us
00:08:11.868 99.99900% : 25508.628us
00:08:11.868 99.99990% : 25508.628us
00:08:11.868 99.99999% : 25508.628us
00:08:11.868
00:08:11.868 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:11.868 =================================================================================
00:08:11.868 1.00000% : 7511.434us
00:08:11.868 10.00000% : 8065.969us
00:08:11.868 25.00000% : 8469.268us
00:08:11.868 50.00000% : 9527.926us
00:08:11.868 75.00000% : 10989.883us
00:08:11.868 90.00000% : 12603.077us
00:08:11.868 95.00000% : 13510.498us
00:08:11.868 98.00000% : 15022.868us
00:08:11.868 99.00000% : 16434.412us
00:08:11.868 99.50000% : 19660.800us
00:08:11.868 99.90000% : 25004.505us
00:08:11.868 99.99000% : 25306.978us
00:08:11.868 99.99900% : 25306.978us
00:08:11.868 99.99990% : 25306.978us
00:08:11.868 99.99999% : 25306.978us
00:08:11.868
00:08:11.868 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:11.868 =================================================================================
00:08:11.868 1.00000% : 7612.258us
00:08:11.868 10.00000% : 8065.969us
00:08:11.868 25.00000% : 8469.268us
00:08:11.868 50.00000% : 9578.338us
00:08:11.868 75.00000% : 11040.295us
00:08:11.868 90.00000% : 12603.077us
00:08:11.868 95.00000% : 13510.498us
00:08:11.868 98.00000% : 14922.043us
00:08:11.868 99.00000% : 16434.412us
00:08:11.868 99.50000% : 19257.502us
00:08:11.868 99.90000% : 24399.557us
00:08:11.868 99.99000% : 24601.206us
00:08:11.868 99.99900% : 24601.206us
00:08:11.868 99.99990% : 24601.206us
00:08:11.868 99.99999% : 24601.206us
00:08:11.868
00:08:11.868 Latency
histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:11.868 ============================================================================== 00:08:11.868 Range in us Cumulative IO count 00:08:11.868 6956.898 - 7007.311: 0.0156% ( 2) 00:08:11.868 7007.311 - 7057.723: 0.0391% ( 3) 00:08:11.868 7057.723 - 7108.135: 0.0859% ( 6) 00:08:11.868 7108.135 - 7158.548: 0.0938% ( 1) 00:08:11.868 7158.548 - 7208.960: 0.1172% ( 3) 00:08:11.868 7208.960 - 7259.372: 0.1797% ( 8) 00:08:11.868 7259.372 - 7309.785: 0.3281% ( 19) 00:08:11.868 7309.785 - 7360.197: 0.4141% ( 11) 00:08:11.868 7360.197 - 7410.609: 0.5781% ( 21) 00:08:11.868 7410.609 - 7461.022: 0.7266% ( 19) 00:08:11.868 7461.022 - 7511.434: 1.0859% ( 46) 00:08:11.868 7511.434 - 7561.846: 1.3594% ( 35) 00:08:11.868 7561.846 - 7612.258: 1.7109% ( 45) 00:08:11.868 7612.258 - 7662.671: 2.3125% ( 77) 00:08:11.868 7662.671 - 7713.083: 3.0625% ( 96) 00:08:11.868 7713.083 - 7763.495: 4.0000% ( 120) 00:08:11.868 7763.495 - 7813.908: 5.3906% ( 178) 00:08:11.868 7813.908 - 7864.320: 6.8203% ( 183) 00:08:11.868 7864.320 - 7914.732: 8.1406% ( 169) 00:08:11.868 7914.732 - 7965.145: 9.5469% ( 180) 00:08:11.868 7965.145 - 8015.557: 10.9219% ( 176) 00:08:11.868 8015.557 - 8065.969: 12.4219% ( 192) 00:08:11.868 8065.969 - 8116.382: 13.7500% ( 170) 00:08:11.868 8116.382 - 8166.794: 15.1250% ( 176) 00:08:11.868 8166.794 - 8217.206: 16.6953% ( 201) 00:08:11.868 8217.206 - 8267.618: 18.0469% ( 173) 00:08:11.868 8267.618 - 8318.031: 19.4844% ( 184) 00:08:11.868 8318.031 - 8368.443: 20.8984% ( 181) 00:08:11.868 8368.443 - 8418.855: 22.0234% ( 144) 00:08:11.868 8418.855 - 8469.268: 23.2812% ( 161) 00:08:11.868 8469.268 - 8519.680: 24.5391% ( 161) 00:08:11.868 8519.680 - 8570.092: 26.0625% ( 195) 00:08:11.868 8570.092 - 8620.505: 27.5781% ( 194) 00:08:11.868 8620.505 - 8670.917: 29.0938% ( 194) 00:08:11.868 8670.917 - 8721.329: 30.4375% ( 172) 00:08:11.868 8721.329 - 8771.742: 31.6562% ( 156) 00:08:11.868 8771.742 - 8822.154: 32.6406% ( 126) 
00:08:11.868 8822.154 - 8872.566: 33.6562% ( 130) 00:08:11.868 8872.566 - 8922.978: 34.5625% ( 116) 00:08:11.868 8922.978 - 8973.391: 35.5234% ( 123) 00:08:11.868 8973.391 - 9023.803: 36.6719% ( 147) 00:08:11.868 9023.803 - 9074.215: 37.7969% ( 144) 00:08:11.868 9074.215 - 9124.628: 39.1172% ( 169) 00:08:11.868 9124.628 - 9175.040: 40.2812% ( 149) 00:08:11.868 9175.040 - 9225.452: 41.5703% ( 165) 00:08:11.868 9225.452 - 9275.865: 42.8359% ( 162) 00:08:11.868 9275.865 - 9326.277: 44.0000% ( 149) 00:08:11.868 9326.277 - 9376.689: 45.3047% ( 167) 00:08:11.868 9376.689 - 9427.102: 46.5469% ( 159) 00:08:11.868 9427.102 - 9477.514: 47.9375% ( 178) 00:08:11.868 9477.514 - 9527.926: 49.3516% ( 181) 00:08:11.868 9527.926 - 9578.338: 50.6328% ( 164) 00:08:11.868 9578.338 - 9628.751: 51.8984% ( 162) 00:08:11.868 9628.751 - 9679.163: 53.0938% ( 153) 00:08:11.868 9679.163 - 9729.575: 54.3516% ( 161) 00:08:11.868 9729.575 - 9779.988: 55.4922% ( 146) 00:08:11.868 9779.988 - 9830.400: 56.7109% ( 156) 00:08:11.868 9830.400 - 9880.812: 57.7578% ( 134) 00:08:11.868 9880.812 - 9931.225: 59.1797% ( 182) 00:08:11.868 9931.225 - 9981.637: 60.5625% ( 177) 00:08:11.868 9981.637 - 10032.049: 61.6719% ( 142) 00:08:11.868 10032.049 - 10082.462: 62.6562% ( 126) 00:08:11.868 10082.462 - 10132.874: 63.5312% ( 112) 00:08:11.868 10132.874 - 10183.286: 64.4141% ( 113) 00:08:11.868 10183.286 - 10233.698: 65.3750% ( 123) 00:08:11.868 10233.698 - 10284.111: 66.3359% ( 123) 00:08:11.868 10284.111 - 10334.523: 67.0547% ( 92) 00:08:11.868 10334.523 - 10384.935: 67.7969% ( 95) 00:08:11.868 10384.935 - 10435.348: 68.4766% ( 87) 00:08:11.868 10435.348 - 10485.760: 69.1250% ( 83) 00:08:11.868 10485.760 - 10536.172: 69.8203% ( 89) 00:08:11.868 10536.172 - 10586.585: 70.3984% ( 74) 00:08:11.868 10586.585 - 10636.997: 71.0000% ( 77) 00:08:11.868 10636.997 - 10687.409: 71.7109% ( 91) 00:08:11.868 10687.409 - 10737.822: 72.2031% ( 63) 00:08:11.868 10737.822 - 10788.234: 72.6797% ( 61) 00:08:11.868 10788.234 - 
10838.646: 73.1562% ( 61) 00:08:11.868 10838.646 - 10889.058: 73.6953% ( 69) 00:08:11.868 10889.058 - 10939.471: 74.1016% ( 52) 00:08:11.868 10939.471 - 10989.883: 74.6641% ( 72) 00:08:11.868 10989.883 - 11040.295: 75.1250% ( 59) 00:08:11.868 11040.295 - 11090.708: 75.6875% ( 72) 00:08:11.868 11090.708 - 11141.120: 76.2734% ( 75) 00:08:11.868 11141.120 - 11191.532: 76.8438% ( 73) 00:08:11.868 11191.532 - 11241.945: 77.3125% ( 60) 00:08:11.868 11241.945 - 11292.357: 77.7109% ( 51) 00:08:11.868 11292.357 - 11342.769: 78.1172% ( 52) 00:08:11.868 11342.769 - 11393.182: 78.5859% ( 60) 00:08:11.868 11393.182 - 11443.594: 78.9141% ( 42) 00:08:11.868 11443.594 - 11494.006: 79.3203% ( 52) 00:08:11.868 11494.006 - 11544.418: 79.7891% ( 60) 00:08:11.868 11544.418 - 11594.831: 80.3438% ( 71) 00:08:11.868 11594.831 - 11645.243: 80.7344% ( 50) 00:08:11.868 11645.243 - 11695.655: 81.2031% ( 60) 00:08:11.868 11695.655 - 11746.068: 81.7578% ( 71) 00:08:11.868 11746.068 - 11796.480: 82.3281% ( 73) 00:08:11.868 11796.480 - 11846.892: 82.8906% ( 72) 00:08:11.868 11846.892 - 11897.305: 83.4766% ( 75) 00:08:11.868 11897.305 - 11947.717: 83.9688% ( 63) 00:08:11.868 11947.717 - 11998.129: 84.4297% ( 59) 00:08:11.868 11998.129 - 12048.542: 84.9062% ( 61) 00:08:11.868 12048.542 - 12098.954: 85.3984% ( 63) 00:08:11.868 12098.954 - 12149.366: 85.8203% ( 54) 00:08:11.868 12149.366 - 12199.778: 86.3359% ( 66) 00:08:11.868 12199.778 - 12250.191: 86.7891% ( 58) 00:08:11.868 12250.191 - 12300.603: 87.2969% ( 65) 00:08:11.868 12300.603 - 12351.015: 87.7812% ( 62) 00:08:11.868 12351.015 - 12401.428: 88.1484% ( 47) 00:08:11.868 12401.428 - 12451.840: 88.4766% ( 42) 00:08:11.868 12451.840 - 12502.252: 88.9062% ( 55) 00:08:11.868 12502.252 - 12552.665: 89.3750% ( 60) 00:08:11.868 12552.665 - 12603.077: 89.7891% ( 53) 00:08:11.868 12603.077 - 12653.489: 90.2109% ( 54) 00:08:11.868 12653.489 - 12703.902: 90.6016% ( 50) 00:08:11.868 12703.902 - 12754.314: 90.9844% ( 49) 00:08:11.868 12754.314 - 12804.726: 
91.3359% ( 45) 00:08:11.868 12804.726 - 12855.138: 91.6562% ( 41) 00:08:11.868 12855.138 - 12905.551: 92.0469% ( 50) 00:08:11.868 12905.551 - 13006.375: 92.6406% ( 76) 00:08:11.868 13006.375 - 13107.200: 93.1172% ( 61) 00:08:11.868 13107.200 - 13208.025: 93.7109% ( 76) 00:08:11.868 13208.025 - 13308.849: 94.1562% ( 57) 00:08:11.868 13308.849 - 13409.674: 94.5938% ( 56) 00:08:11.868 13409.674 - 13510.498: 95.0391% ( 57) 00:08:11.868 13510.498 - 13611.323: 95.3750% ( 43) 00:08:11.868 13611.323 - 13712.148: 95.6953% ( 41) 00:08:11.868 13712.148 - 13812.972: 95.9688% ( 35) 00:08:11.868 13812.972 - 13913.797: 96.2188% ( 32) 00:08:11.868 13913.797 - 14014.622: 96.4375% ( 28) 00:08:11.868 14014.622 - 14115.446: 96.6250% ( 24) 00:08:11.868 14115.446 - 14216.271: 96.7969% ( 22) 00:08:11.868 14216.271 - 14317.095: 96.8906% ( 12) 00:08:11.868 14317.095 - 14417.920: 97.0234% ( 17) 00:08:11.868 14417.920 - 14518.745: 97.2109% ( 24) 00:08:11.868 14518.745 - 14619.569: 97.3438% ( 17) 00:08:11.868 14619.569 - 14720.394: 97.4531% ( 14) 00:08:11.868 14720.394 - 14821.218: 97.6406% ( 24) 00:08:11.868 14821.218 - 14922.043: 97.7656% ( 16) 00:08:11.868 14922.043 - 15022.868: 97.8984% ( 17) 00:08:11.869 15022.868 - 15123.692: 98.0469% ( 19) 00:08:11.869 15123.692 - 15224.517: 98.2109% ( 21) 00:08:11.869 15224.517 - 15325.342: 98.4297% ( 28) 00:08:11.869 15325.342 - 15426.166: 98.5547% ( 16) 00:08:11.869 15426.166 - 15526.991: 98.6406% ( 11) 00:08:11.869 15526.991 - 15627.815: 98.7188% ( 10) 00:08:11.869 15627.815 - 15728.640: 98.7656% ( 6) 00:08:11.869 15728.640 - 15829.465: 98.7969% ( 4) 00:08:11.869 15829.465 - 15930.289: 98.8281% ( 4) 00:08:11.869 15930.289 - 16031.114: 98.8828% ( 7) 00:08:11.869 16031.114 - 16131.938: 98.9062% ( 3) 00:08:11.869 16131.938 - 16232.763: 98.9766% ( 9) 00:08:11.869 16232.763 - 16333.588: 98.9922% ( 2) 00:08:11.869 16333.588 - 16434.412: 99.0000% ( 1) 00:08:11.869 17543.483 - 17644.308: 99.0078% ( 1) 00:08:11.869 17644.308 - 17745.132: 99.0312% ( 3) 
00:08:11.869 17745.132 - 17845.957: 99.1094% ( 10) 00:08:11.869 17845.957 - 17946.782: 99.2344% ( 16) 00:08:11.869 17946.782 - 18047.606: 99.2500% ( 2) 00:08:11.869 18047.606 - 18148.431: 99.2734% ( 3) 00:08:11.869 18148.431 - 18249.255: 99.2891% ( 2) 00:08:11.869 18249.255 - 18350.080: 99.3359% ( 6) 00:08:11.869 18350.080 - 18450.905: 99.3750% ( 5) 00:08:11.869 18450.905 - 18551.729: 99.4141% ( 5) 00:08:11.869 18551.729 - 18652.554: 99.4531% ( 5) 00:08:11.869 18652.554 - 18753.378: 99.4922% ( 5) 00:08:11.869 18753.378 - 18854.203: 99.5000% ( 1) 00:08:11.869 24601.206 - 24702.031: 99.5078% ( 1) 00:08:11.869 24702.031 - 24802.855: 99.5547% ( 6) 00:08:11.869 24802.855 - 24903.680: 99.5938% ( 5) 00:08:11.869 24903.680 - 25004.505: 99.6406% ( 6) 00:08:11.869 25004.505 - 25105.329: 99.6641% ( 3) 00:08:11.869 25105.329 - 25206.154: 99.6875% ( 3) 00:08:11.869 25206.154 - 25306.978: 99.7109% ( 3) 00:08:11.869 25306.978 - 25407.803: 99.7344% ( 3) 00:08:11.869 25407.803 - 25508.628: 99.7578% ( 3) 00:08:11.869 25508.628 - 25609.452: 99.7812% ( 3) 00:08:11.869 25609.452 - 25710.277: 99.7969% ( 2) 00:08:11.869 25710.277 - 25811.102: 99.8281% ( 4) 00:08:11.869 25811.102 - 26012.751: 99.8906% ( 8) 00:08:11.869 26012.751 - 26214.400: 99.9453% ( 7) 00:08:11.869 26214.400 - 26416.049: 99.9688% ( 3) 00:08:11.869 26416.049 - 26617.698: 100.0000% ( 4) 00:08:11.869 00:08:11.869 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:11.869 ============================================================================== 00:08:11.869 Range in us Cumulative IO count 00:08:11.869 6805.662 - 6856.074: 0.0078% ( 1) 00:08:11.869 6906.486 - 6956.898: 0.0469% ( 5) 00:08:11.869 6956.898 - 7007.311: 0.0938% ( 6) 00:08:11.869 7007.311 - 7057.723: 0.1484% ( 7) 00:08:11.869 7057.723 - 7108.135: 0.2109% ( 8) 00:08:11.869 7108.135 - 7158.548: 0.3125% ( 13) 00:08:11.869 7158.548 - 7208.960: 0.3828% ( 9) 00:08:11.869 7208.960 - 7259.372: 0.4375% ( 7) 00:08:11.869 7259.372 - 7309.785: 0.4688% ( 
4) 00:08:11.869 7309.785 - 7360.197: 0.5000% ( 4) 00:08:11.869 7360.197 - 7410.609: 0.5625% ( 8) 00:08:11.869 7410.609 - 7461.022: 0.5938% ( 4) 00:08:11.869 7461.022 - 7511.434: 0.6328% ( 5) 00:08:11.869 7511.434 - 7561.846: 0.6953% ( 8) 00:08:11.869 7561.846 - 7612.258: 0.7812% ( 11) 00:08:11.869 7612.258 - 7662.671: 0.9688% ( 24) 00:08:11.869 7662.671 - 7713.083: 1.2891% ( 41) 00:08:11.869 7713.083 - 7763.495: 1.8281% ( 69) 00:08:11.869 7763.495 - 7813.908: 2.6406% ( 104) 00:08:11.869 7813.908 - 7864.320: 3.7266% ( 139) 00:08:11.869 7864.320 - 7914.732: 5.0938% ( 175) 00:08:11.869 7914.732 - 7965.145: 6.6719% ( 202) 00:08:11.869 7965.145 - 8015.557: 8.6406% ( 252) 00:08:11.869 8015.557 - 8065.969: 10.7031% ( 264) 00:08:11.869 8065.969 - 8116.382: 12.7188% ( 258) 00:08:11.869 8116.382 - 8166.794: 14.6328% ( 245) 00:08:11.869 8166.794 - 8217.206: 16.7500% ( 271) 00:08:11.869 8217.206 - 8267.618: 18.8125% ( 264) 00:08:11.869 8267.618 - 8318.031: 20.8359% ( 259) 00:08:11.869 8318.031 - 8368.443: 22.5469% ( 219) 00:08:11.869 8368.443 - 8418.855: 24.1875% ( 210) 00:08:11.869 8418.855 - 8469.268: 25.5781% ( 178) 00:08:11.869 8469.268 - 8519.680: 26.8125% ( 158) 00:08:11.869 8519.680 - 8570.092: 27.7734% ( 123) 00:08:11.869 8570.092 - 8620.505: 28.5938% ( 105) 00:08:11.869 8620.505 - 8670.917: 29.3750% ( 100) 00:08:11.869 8670.917 - 8721.329: 30.3750% ( 128) 00:08:11.869 8721.329 - 8771.742: 31.3984% ( 131) 00:08:11.869 8771.742 - 8822.154: 32.3203% ( 118) 00:08:11.869 8822.154 - 8872.566: 33.1250% ( 103) 00:08:11.869 8872.566 - 8922.978: 34.2422% ( 143) 00:08:11.869 8922.978 - 8973.391: 35.1797% ( 120) 00:08:11.869 8973.391 - 9023.803: 36.2500% ( 137) 00:08:11.869 9023.803 - 9074.215: 37.2656% ( 130) 00:08:11.869 9074.215 - 9124.628: 38.3125% ( 134) 00:08:11.869 9124.628 - 9175.040: 39.5625% ( 160) 00:08:11.869 9175.040 - 9225.452: 40.7188% ( 148) 00:08:11.869 9225.452 - 9275.865: 41.8203% ( 141) 00:08:11.869 9275.865 - 9326.277: 43.0469% ( 157) 00:08:11.869 9326.277 - 
9376.689: 44.4531% ( 180) 00:08:11.869 9376.689 - 9427.102: 45.8984% ( 185) 00:08:11.869 9427.102 - 9477.514: 47.3828% ( 190) 00:08:11.869 9477.514 - 9527.926: 48.6953% ( 168) 00:08:11.869 9527.926 - 9578.338: 50.3203% ( 208) 00:08:11.869 9578.338 - 9628.751: 51.7578% ( 184) 00:08:11.869 9628.751 - 9679.163: 53.4453% ( 216) 00:08:11.869 9679.163 - 9729.575: 55.2031% ( 225) 00:08:11.869 9729.575 - 9779.988: 57.0156% ( 232) 00:08:11.869 9779.988 - 9830.400: 58.6562% ( 210) 00:08:11.869 9830.400 - 9880.812: 59.9375% ( 164) 00:08:11.869 9880.812 - 9931.225: 61.1641% ( 157) 00:08:11.869 9931.225 - 9981.637: 62.2422% ( 138) 00:08:11.869 9981.637 - 10032.049: 63.0781% ( 107) 00:08:11.869 10032.049 - 10082.462: 63.9609% ( 113) 00:08:11.869 10082.462 - 10132.874: 64.8594% ( 115) 00:08:11.869 10132.874 - 10183.286: 65.5625% ( 90) 00:08:11.869 10183.286 - 10233.698: 66.2422% ( 87) 00:08:11.869 10233.698 - 10284.111: 66.9531% ( 91) 00:08:11.869 10284.111 - 10334.523: 67.5625% ( 78) 00:08:11.869 10334.523 - 10384.935: 68.1797% ( 79) 00:08:11.869 10384.935 - 10435.348: 69.0000% ( 105) 00:08:11.869 10435.348 - 10485.760: 69.6875% ( 88) 00:08:11.869 10485.760 - 10536.172: 70.2188% ( 68) 00:08:11.869 10536.172 - 10586.585: 70.8359% ( 79) 00:08:11.869 10586.585 - 10636.997: 71.3672% ( 68) 00:08:11.869 10636.997 - 10687.409: 72.0234% ( 84) 00:08:11.869 10687.409 - 10737.822: 72.5938% ( 73) 00:08:11.869 10737.822 - 10788.234: 73.1953% ( 77) 00:08:11.869 10788.234 - 10838.646: 73.8281% ( 81) 00:08:11.869 10838.646 - 10889.058: 74.3516% ( 67) 00:08:11.869 10889.058 - 10939.471: 74.9219% ( 73) 00:08:11.869 10939.471 - 10989.883: 75.3828% ( 59) 00:08:11.869 10989.883 - 11040.295: 75.9922% ( 78) 00:08:11.869 11040.295 - 11090.708: 76.4297% ( 56) 00:08:11.869 11090.708 - 11141.120: 76.9531% ( 67) 00:08:11.869 11141.120 - 11191.532: 77.4609% ( 65) 00:08:11.869 11191.532 - 11241.945: 78.0234% ( 72) 00:08:11.869 11241.945 - 11292.357: 78.4922% ( 60) 00:08:11.869 11292.357 - 11342.769: 78.9062% 
( 53) 00:08:11.869 11342.769 - 11393.182: 79.2812% ( 48) 00:08:11.869 11393.182 - 11443.594: 79.6875% ( 52) 00:08:11.869 11443.594 - 11494.006: 80.0312% ( 44) 00:08:11.869 11494.006 - 11544.418: 80.4453% ( 53) 00:08:11.869 11544.418 - 11594.831: 80.8516% ( 52) 00:08:11.869 11594.831 - 11645.243: 81.2656% ( 53) 00:08:11.869 11645.243 - 11695.655: 81.6953% ( 55) 00:08:11.869 11695.655 - 11746.068: 82.0703% ( 48) 00:08:11.869 11746.068 - 11796.480: 82.6328% ( 72) 00:08:11.869 11796.480 - 11846.892: 83.1953% ( 72) 00:08:11.869 11846.892 - 11897.305: 83.6641% ( 60) 00:08:11.869 11897.305 - 11947.717: 84.1250% ( 59) 00:08:11.869 11947.717 - 11998.129: 84.5391% ( 53) 00:08:11.869 11998.129 - 12048.542: 85.0156% ( 61) 00:08:11.869 12048.542 - 12098.954: 85.4609% ( 57) 00:08:11.869 12098.954 - 12149.366: 85.8672% ( 52) 00:08:11.869 12149.366 - 12199.778: 86.3438% ( 61) 00:08:11.869 12199.778 - 12250.191: 86.6797% ( 43) 00:08:11.869 12250.191 - 12300.603: 87.0547% ( 48) 00:08:11.869 12300.603 - 12351.015: 87.4141% ( 46) 00:08:11.869 12351.015 - 12401.428: 87.8047% ( 50) 00:08:11.869 12401.428 - 12451.840: 88.2344% ( 55) 00:08:11.869 12451.840 - 12502.252: 88.6016% ( 47) 00:08:11.869 12502.252 - 12552.665: 88.9531% ( 45) 00:08:11.869 12552.665 - 12603.077: 89.4219% ( 60) 00:08:11.869 12603.077 - 12653.489: 89.7969% ( 48) 00:08:11.869 12653.489 - 12703.902: 90.1641% ( 47) 00:08:11.869 12703.902 - 12754.314: 90.5703% ( 52) 00:08:11.869 12754.314 - 12804.726: 90.9219% ( 45) 00:08:11.869 12804.726 - 12855.138: 91.2500% ( 42) 00:08:11.869 12855.138 - 12905.551: 91.5938% ( 44) 00:08:11.869 12905.551 - 13006.375: 92.2344% ( 82) 00:08:11.869 13006.375 - 13107.200: 92.9141% ( 87) 00:08:11.869 13107.200 - 13208.025: 93.5469% ( 81) 00:08:11.869 13208.025 - 13308.849: 94.3281% ( 100) 00:08:11.869 13308.849 - 13409.674: 94.7656% ( 56) 00:08:11.869 13409.674 - 13510.498: 95.1484% ( 49) 00:08:11.869 13510.498 - 13611.323: 95.5312% ( 49) 00:08:11.869 13611.323 - 13712.148: 95.8906% ( 46) 
00:08:11.869 13712.148 - 13812.972: 96.1953% ( 39) 00:08:11.869 13812.972 - 13913.797: 96.4609% ( 34) 00:08:11.869 13913.797 - 14014.622: 96.7500% ( 37) 00:08:11.869 14014.622 - 14115.446: 96.9609% ( 27) 00:08:11.869 14115.446 - 14216.271: 97.1250% ( 21) 00:08:11.869 14216.271 - 14317.095: 97.2188% ( 12) 00:08:11.869 14317.095 - 14417.920: 97.3359% ( 15) 00:08:11.869 14417.920 - 14518.745: 97.4688% ( 17) 00:08:11.869 14518.745 - 14619.569: 97.5938% ( 16) 00:08:11.870 14619.569 - 14720.394: 97.7266% ( 17) 00:08:11.870 14720.394 - 14821.218: 97.8672% ( 18) 00:08:11.870 14821.218 - 14922.043: 97.9844% ( 15) 00:08:11.870 14922.043 - 15022.868: 98.0938% ( 14) 00:08:11.870 15022.868 - 15123.692: 98.1797% ( 11) 00:08:11.870 15123.692 - 15224.517: 98.3125% ( 17) 00:08:11.870 15224.517 - 15325.342: 98.4375% ( 16) 00:08:11.870 15325.342 - 15426.166: 98.5625% ( 16) 00:08:11.870 15426.166 - 15526.991: 98.6562% ( 12) 00:08:11.870 15526.991 - 15627.815: 98.7578% ( 13) 00:08:11.870 15627.815 - 15728.640: 98.8125% ( 7) 00:08:11.870 15728.640 - 15829.465: 98.8594% ( 6) 00:08:11.870 15829.465 - 15930.289: 98.8984% ( 5) 00:08:11.870 15930.289 - 16031.114: 98.9453% ( 6) 00:08:11.870 16031.114 - 16131.938: 98.9922% ( 6) 00:08:11.870 16131.938 - 16232.763: 99.0000% ( 1) 00:08:11.870 18148.431 - 18249.255: 99.0078% ( 1) 00:08:11.870 18249.255 - 18350.080: 99.0547% ( 6) 00:08:11.870 18350.080 - 18450.905: 99.0938% ( 5) 00:08:11.870 18450.905 - 18551.729: 99.1250% ( 4) 00:08:11.870 18551.729 - 18652.554: 99.1719% ( 6) 00:08:11.870 18652.554 - 18753.378: 99.2109% ( 5) 00:08:11.870 18753.378 - 18854.203: 99.2422% ( 4) 00:08:11.870 18854.203 - 18955.028: 99.2812% ( 5) 00:08:11.870 18955.028 - 19055.852: 99.3203% ( 5) 00:08:11.870 19055.852 - 19156.677: 99.3516% ( 4) 00:08:11.870 19156.677 - 19257.502: 99.3828% ( 4) 00:08:11.870 19257.502 - 19358.326: 99.4062% ( 3) 00:08:11.870 19358.326 - 19459.151: 99.4375% ( 4) 00:08:11.870 19459.151 - 19559.975: 99.4766% ( 5) 00:08:11.870 19559.975 - 
19660.800: 99.5000% ( 3) 00:08:11.870 24399.557 - 24500.382: 99.5078% ( 1) 00:08:11.870 24500.382 - 24601.206: 99.5156% ( 1) 00:08:11.870 24601.206 - 24702.031: 99.5234% ( 1) 00:08:11.870 24702.031 - 24802.855: 99.5391% ( 2) 00:08:11.870 24802.855 - 24903.680: 99.5469% ( 1) 00:08:11.870 24903.680 - 25004.505: 99.6016% ( 7) 00:08:11.870 25004.505 - 25105.329: 99.7188% ( 15) 00:08:11.870 25105.329 - 25206.154: 99.7891% ( 9) 00:08:11.870 25206.154 - 25306.978: 99.8516% ( 8) 00:08:11.870 25306.978 - 25407.803: 99.8750% ( 3) 00:08:11.870 25407.803 - 25508.628: 99.9062% ( 4) 00:08:11.870 25508.628 - 25609.452: 99.9297% ( 3) 00:08:11.870 25609.452 - 25710.277: 99.9609% ( 4) 00:08:11.870 25710.277 - 25811.102: 99.9922% ( 4) 00:08:11.870 25811.102 - 26012.751: 100.0000% ( 1) 00:08:11.870 00:08:11.870 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:11.870 ============================================================================== 00:08:11.870 Range in us Cumulative IO count 00:08:11.870 5444.529 - 5469.735: 0.0078% ( 1) 00:08:11.870 5469.735 - 5494.942: 0.0234% ( 2) 00:08:11.870 5520.148 - 5545.354: 0.0391% ( 2) 00:08:11.870 5545.354 - 5570.560: 0.0547% ( 2) 00:08:11.870 5570.560 - 5595.766: 0.0703% ( 2) 00:08:11.870 5595.766 - 5620.972: 0.1094% ( 5) 00:08:11.870 5620.972 - 5646.178: 0.1797% ( 9) 00:08:11.870 5646.178 - 5671.385: 0.2734% ( 12) 00:08:11.870 5671.385 - 5696.591: 0.3359% ( 8) 00:08:11.870 5696.591 - 5721.797: 0.3828% ( 6) 00:08:11.870 5721.797 - 5747.003: 0.4141% ( 4) 00:08:11.870 5747.003 - 5772.209: 0.4219% ( 1) 00:08:11.870 5772.209 - 5797.415: 0.4375% ( 2) 00:08:11.870 5797.415 - 5822.622: 0.4531% ( 2) 00:08:11.870 5822.622 - 5847.828: 0.4609% ( 1) 00:08:11.870 5847.828 - 5873.034: 0.4766% ( 2) 00:08:11.870 5873.034 - 5898.240: 0.4922% ( 2) 00:08:11.870 5898.240 - 5923.446: 0.5000% ( 1) 00:08:11.870 7158.548 - 7208.960: 0.5078% ( 1) 00:08:11.870 7259.372 - 7309.785: 0.5234% ( 2) 00:08:11.870 7309.785 - 7360.197: 0.5547% ( 4) 
00:08:11.870 7360.197 - 7410.609: 0.6016% ( 6) 00:08:11.870 7410.609 - 7461.022: 0.6406% ( 5) 00:08:11.870 7461.022 - 7511.434: 0.7109% ( 9) 00:08:11.870 7511.434 - 7561.846: 1.0078% ( 38) 00:08:11.870 7561.846 - 7612.258: 1.1797% ( 22) 00:08:11.870 7612.258 - 7662.671: 1.3672% ( 24) 00:08:11.870 7662.671 - 7713.083: 1.7188% ( 45) 00:08:11.870 7713.083 - 7763.495: 2.1641% ( 57) 00:08:11.870 7763.495 - 7813.908: 2.7656% ( 77) 00:08:11.870 7813.908 - 7864.320: 3.5703% ( 103) 00:08:11.870 7864.320 - 7914.732: 4.6484% ( 138) 00:08:11.870 7914.732 - 7965.145: 6.3906% ( 223) 00:08:11.870 7965.145 - 8015.557: 8.6562% ( 290) 00:08:11.870 8015.557 - 8065.969: 11.0312% ( 304) 00:08:11.870 8065.969 - 8116.382: 13.1328% ( 269) 00:08:11.870 8116.382 - 8166.794: 15.3750% ( 287) 00:08:11.870 8166.794 - 8217.206: 17.8047% ( 311) 00:08:11.870 8217.206 - 8267.618: 19.8281% ( 259) 00:08:11.870 8267.618 - 8318.031: 21.5391% ( 219) 00:08:11.870 8318.031 - 8368.443: 22.7266% ( 152) 00:08:11.870 8368.443 - 8418.855: 24.0000% ( 163) 00:08:11.870 8418.855 - 8469.268: 25.2266% ( 157) 00:08:11.870 8469.268 - 8519.680: 26.2891% ( 136) 00:08:11.870 8519.680 - 8570.092: 27.2969% ( 129) 00:08:11.870 8570.092 - 8620.505: 28.4141% ( 143) 00:08:11.870 8620.505 - 8670.917: 29.6797% ( 162) 00:08:11.870 8670.917 - 8721.329: 30.9219% ( 159) 00:08:11.870 8721.329 - 8771.742: 32.0312% ( 142) 00:08:11.870 8771.742 - 8822.154: 32.9141% ( 113) 00:08:11.870 8822.154 - 8872.566: 33.8672% ( 122) 00:08:11.870 8872.566 - 8922.978: 34.7109% ( 108) 00:08:11.870 8922.978 - 8973.391: 35.9844% ( 163) 00:08:11.870 8973.391 - 9023.803: 37.1016% ( 143) 00:08:11.870 9023.803 - 9074.215: 38.2109% ( 142) 00:08:11.870 9074.215 - 9124.628: 39.0781% ( 111) 00:08:11.870 9124.628 - 9175.040: 39.8750% ( 102) 00:08:11.870 9175.040 - 9225.452: 40.7656% ( 114) 00:08:11.870 9225.452 - 9275.865: 41.8438% ( 138) 00:08:11.870 9275.865 - 9326.277: 42.7969% ( 122) 00:08:11.870 9326.277 - 9376.689: 44.1250% ( 170) 00:08:11.870 9376.689 - 
9427.102: 45.4922% ( 175) 00:08:11.870 9427.102 - 9477.514: 47.3438% ( 237) 00:08:11.870 9477.514 - 9527.926: 49.0078% ( 213) 00:08:11.870 9527.926 - 9578.338: 50.4453% ( 184) 00:08:11.870 9578.338 - 9628.751: 52.3281% ( 241) 00:08:11.870 9628.751 - 9679.163: 54.0000% ( 214) 00:08:11.870 9679.163 - 9729.575: 55.4766% ( 189) 00:08:11.870 9729.575 - 9779.988: 56.7656% ( 165) 00:08:11.870 9779.988 - 9830.400: 58.3281% ( 200) 00:08:11.870 9830.400 - 9880.812: 59.5859% ( 161) 00:08:11.870 9880.812 - 9931.225: 60.8438% ( 161) 00:08:11.870 9931.225 - 9981.637: 61.9688% ( 144) 00:08:11.870 9981.637 - 10032.049: 62.9062% ( 120) 00:08:11.870 10032.049 - 10082.462: 63.6094% ( 90) 00:08:11.870 10082.462 - 10132.874: 64.5312% ( 118) 00:08:11.870 10132.874 - 10183.286: 65.3281% ( 102) 00:08:11.870 10183.286 - 10233.698: 66.0078% ( 87) 00:08:11.870 10233.698 - 10284.111: 66.8516% ( 108) 00:08:11.870 10284.111 - 10334.523: 67.6406% ( 101) 00:08:11.870 10334.523 - 10384.935: 68.4219% ( 100) 00:08:11.870 10384.935 - 10435.348: 68.9297% ( 65) 00:08:11.870 10435.348 - 10485.760: 69.4531% ( 67) 00:08:11.870 10485.760 - 10536.172: 69.9766% ( 67) 00:08:11.870 10536.172 - 10586.585: 70.4688% ( 63) 00:08:11.870 10586.585 - 10636.997: 70.9688% ( 64) 00:08:11.870 10636.997 - 10687.409: 71.3984% ( 55) 00:08:11.870 10687.409 - 10737.822: 71.8672% ( 60) 00:08:11.870 10737.822 - 10788.234: 72.5078% ( 82) 00:08:11.870 10788.234 - 10838.646: 73.1562% ( 83) 00:08:11.870 10838.646 - 10889.058: 73.7969% ( 82) 00:08:11.870 10889.058 - 10939.471: 74.5391% ( 95) 00:08:11.870 10939.471 - 10989.883: 75.2266% ( 88) 00:08:11.870 10989.883 - 11040.295: 75.6875% ( 59) 00:08:11.870 11040.295 - 11090.708: 76.1484% ( 59) 00:08:11.870 11090.708 - 11141.120: 76.6641% ( 66) 00:08:11.870 11141.120 - 11191.532: 77.0000% ( 43) 00:08:11.870 11191.532 - 11241.945: 77.3594% ( 46) 00:08:11.870 11241.945 - 11292.357: 77.7969% ( 56) 00:08:11.870 11292.357 - 11342.769: 78.2734% ( 61) 00:08:11.870 11342.769 - 11393.182: 
78.7188% ( 57) 00:08:11.870 11393.182 - 11443.594: 79.1250% ( 52) 00:08:11.870 11443.594 - 11494.006: 79.5859% ( 59) 00:08:11.870 11494.006 - 11544.418: 80.0703% ( 62) 00:08:11.870 11544.418 - 11594.831: 80.6641% ( 76) 00:08:11.870 11594.831 - 11645.243: 81.1094% ( 57) 00:08:11.870 11645.243 - 11695.655: 81.5234% ( 53) 00:08:11.870 11695.655 - 11746.068: 81.9609% ( 56) 00:08:11.870 11746.068 - 11796.480: 82.5625% ( 77) 00:08:11.870 11796.480 - 11846.892: 83.0469% ( 62) 00:08:11.870 11846.892 - 11897.305: 83.4609% ( 53) 00:08:11.870 11897.305 - 11947.717: 83.8984% ( 56) 00:08:11.870 11947.717 - 11998.129: 84.4688% ( 73) 00:08:11.870 11998.129 - 12048.542: 84.9375% ( 60) 00:08:11.870 12048.542 - 12098.954: 85.4375% ( 64) 00:08:11.870 12098.954 - 12149.366: 85.9375% ( 64) 00:08:11.870 12149.366 - 12199.778: 86.5312% ( 76) 00:08:11.870 12199.778 - 12250.191: 87.0625% ( 68) 00:08:11.870 12250.191 - 12300.603: 87.4922% ( 55) 00:08:11.870 12300.603 - 12351.015: 87.8906% ( 51) 00:08:11.870 12351.015 - 12401.428: 88.3203% ( 55) 00:08:11.870 12401.428 - 12451.840: 88.7969% ( 61) 00:08:11.870 12451.840 - 12502.252: 89.2578% ( 59) 00:08:11.870 12502.252 - 12552.665: 89.6719% ( 53) 00:08:11.870 12552.665 - 12603.077: 90.0859% ( 53) 00:08:11.870 12603.077 - 12653.489: 90.4688% ( 49) 00:08:11.870 12653.489 - 12703.902: 90.8438% ( 48) 00:08:11.870 12703.902 - 12754.314: 91.1484% ( 39) 00:08:11.870 12754.314 - 12804.726: 91.4453% ( 38) 00:08:11.870 12804.726 - 12855.138: 91.8125% ( 47) 00:08:11.870 12855.138 - 12905.551: 92.1641% ( 45) 00:08:11.870 12905.551 - 13006.375: 92.7344% ( 73) 00:08:11.870 13006.375 - 13107.200: 93.2031% ( 60) 00:08:11.870 13107.200 - 13208.025: 93.7031% ( 64) 00:08:11.870 13208.025 - 13308.849: 94.1875% ( 62) 00:08:11.871 13308.849 - 13409.674: 94.5938% ( 52) 00:08:11.871 13409.674 - 13510.498: 95.0078% ( 53) 00:08:11.871 13510.498 - 13611.323: 95.3125% ( 39) 00:08:11.871 13611.323 - 13712.148: 95.5703% ( 33) 00:08:11.871 13712.148 - 13812.972: 95.7969% ( 
29) 00:08:11.871 13812.972 - 13913.797: 96.1172% ( 41) 00:08:11.871 13913.797 - 14014.622: 96.4531% ( 43) 00:08:11.871 14014.622 - 14115.446: 96.8281% ( 48) 00:08:11.871 14115.446 - 14216.271: 97.0547% ( 29) 00:08:11.871 14216.271 - 14317.095: 97.2188% ( 21) 00:08:11.871 14317.095 - 14417.920: 97.5312% ( 40) 00:08:11.871 14417.920 - 14518.745: 97.7031% ( 22) 00:08:11.871 14518.745 - 14619.569: 97.8359% ( 17) 00:08:11.871 14619.569 - 14720.394: 97.9375% ( 13) 00:08:11.871 14720.394 - 14821.218: 97.9844% ( 6) 00:08:11.871 14821.218 - 14922.043: 98.0000% ( 2) 00:08:11.871 14922.043 - 15022.868: 98.0078% ( 1) 00:08:11.871 15022.868 - 15123.692: 98.0625% ( 7) 00:08:11.871 15123.692 - 15224.517: 98.1797% ( 15) 00:08:11.871 15224.517 - 15325.342: 98.2891% ( 14) 00:08:11.871 15325.342 - 15426.166: 98.3984% ( 14) 00:08:11.871 15426.166 - 15526.991: 98.5156% ( 15) 00:08:11.871 15526.991 - 15627.815: 98.6250% ( 14) 00:08:11.871 15627.815 - 15728.640: 98.7188% ( 12) 00:08:11.871 15728.640 - 15829.465: 98.8125% ( 12) 00:08:11.871 15829.465 - 15930.289: 98.8984% ( 11) 00:08:11.871 15930.289 - 16031.114: 98.9688% ( 9) 00:08:11.871 16031.114 - 16131.938: 99.0000% ( 4) 00:08:11.871 18148.431 - 18249.255: 99.0312% ( 4) 00:08:11.871 18249.255 - 18350.080: 99.0859% ( 7) 00:08:11.871 18350.080 - 18450.905: 99.1172% ( 4) 00:08:11.871 18450.905 - 18551.729: 99.1562% ( 5) 00:08:11.871 18551.729 - 18652.554: 99.1953% ( 5) 00:08:11.871 18652.554 - 18753.378: 99.2344% ( 5) 00:08:11.871 18753.378 - 18854.203: 99.2578% ( 3) 00:08:11.871 18854.203 - 18955.028: 99.2969% ( 5) 00:08:11.871 18955.028 - 19055.852: 99.3203% ( 3) 00:08:11.871 19055.852 - 19156.677: 99.3438% ( 3) 00:08:11.871 19156.677 - 19257.502: 99.3672% ( 3) 00:08:11.871 19257.502 - 19358.326: 99.3906% ( 3) 00:08:11.871 19358.326 - 19459.151: 99.4219% ( 4) 00:08:11.871 19459.151 - 19559.975: 99.4375% ( 2) 00:08:11.871 19559.975 - 19660.800: 99.4688% ( 4) 00:08:11.871 19660.800 - 19761.625: 99.5000% ( 4) 00:08:11.871 24298.732 - 
24399.557: 99.5312% ( 4) 00:08:11.871 24399.557 - 24500.382: 99.5469% ( 2) 00:08:11.871 24500.382 - 24601.206: 99.5625% ( 2) 00:08:11.871 24601.206 - 24702.031: 99.5859% ( 3) 00:08:11.871 25004.505 - 25105.329: 99.6250% ( 5) 00:08:11.871 25105.329 - 25206.154: 99.6797% ( 7) 00:08:11.871 25206.154 - 25306.978: 99.7344% ( 7) 00:08:11.871 25306.978 - 25407.803: 99.7812% ( 6) 00:08:11.871 25407.803 - 25508.628: 99.8125% ( 4) 00:08:11.871 25508.628 - 25609.452: 99.8594% ( 6) 00:08:11.871 25609.452 - 25710.277: 99.8906% ( 4) 00:08:11.871 25710.277 - 25811.102: 99.9297% ( 5) 00:08:11.871 25811.102 - 26012.751: 100.0000% ( 9) 00:08:11.871 00:08:11.871 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:11.871 ============================================================================== 00:08:11.871 Range in us Cumulative IO count 00:08:11.871 5192.468 - 5217.674: 0.0078% ( 1) 00:08:11.871 5343.705 - 5368.911: 0.0391% ( 4) 00:08:11.871 5368.911 - 5394.117: 0.0781% ( 5) 00:08:11.871 5394.117 - 5419.323: 0.1094% ( 4) 00:08:11.871 5419.323 - 5444.529: 0.1562% ( 6) 00:08:11.871 5444.529 - 5469.735: 0.2578% ( 13) 00:08:11.871 5469.735 - 5494.942: 0.3672% ( 14) 00:08:11.871 5494.942 - 5520.148: 0.3906% ( 3) 00:08:11.871 5520.148 - 5545.354: 0.4062% ( 2) 00:08:11.871 5545.354 - 5570.560: 0.4219% ( 2) 00:08:11.871 5570.560 - 5595.766: 0.4297% ( 1) 00:08:11.871 5595.766 - 5620.972: 0.4453% ( 2) 00:08:11.871 5620.972 - 5646.178: 0.4609% ( 2) 00:08:11.871 5646.178 - 5671.385: 0.4688% ( 1) 00:08:11.871 5671.385 - 5696.591: 0.4844% ( 2) 00:08:11.871 5696.591 - 5721.797: 0.5000% ( 2) 00:08:11.871 7309.785 - 7360.197: 0.5312% ( 4) 00:08:11.871 7360.197 - 7410.609: 0.5625% ( 4) 00:08:11.871 7410.609 - 7461.022: 0.6719% ( 14) 00:08:11.871 7461.022 - 7511.434: 0.7656% ( 12) 00:08:11.871 7511.434 - 7561.846: 0.9141% ( 19) 00:08:11.871 7561.846 - 7612.258: 1.0781% ( 21) 00:08:11.871 7612.258 - 7662.671: 1.3516% ( 35) 00:08:11.871 7662.671 - 7713.083: 1.7188% ( 47) 00:08:11.871 
7713.083 - 7763.495: 2.0781% ( 46) 00:08:11.871 7763.495 - 7813.908: 2.6328% ( 71) 00:08:11.871 7813.908 - 7864.320: 3.2891% ( 84) 00:08:11.871 7864.320 - 7914.732: 4.6172% ( 170) 00:08:11.871 7914.732 - 7965.145: 6.0781% ( 187) 00:08:11.871 7965.145 - 8015.557: 7.8125% ( 222) 00:08:11.871 8015.557 - 8065.969: 10.1562% ( 300) 00:08:11.871 8065.969 - 8116.382: 12.3125% ( 276) 00:08:11.871 8116.382 - 8166.794: 14.7891% ( 317) 00:08:11.871 8166.794 - 8217.206: 16.8281% ( 261) 00:08:11.871 8217.206 - 8267.618: 19.0547% ( 285) 00:08:11.871 8267.618 - 8318.031: 20.8516% ( 230) 00:08:11.871 8318.031 - 8368.443: 22.5938% ( 223) 00:08:11.871 8368.443 - 8418.855: 23.8594% ( 162) 00:08:11.871 8418.855 - 8469.268: 25.1484% ( 165) 00:08:11.871 8469.268 - 8519.680: 26.2969% ( 147) 00:08:11.871 8519.680 - 8570.092: 27.4688% ( 150) 00:08:11.871 8570.092 - 8620.505: 28.5469% ( 138) 00:08:11.871 8620.505 - 8670.917: 29.6641% ( 143) 00:08:11.871 8670.917 - 8721.329: 30.7344% ( 137) 00:08:11.871 8721.329 - 8771.742: 31.9766% ( 159) 00:08:11.871 8771.742 - 8822.154: 32.7422% ( 98) 00:08:11.871 8822.154 - 8872.566: 33.5469% ( 103) 00:08:11.871 8872.566 - 8922.978: 34.5156% ( 124) 00:08:11.871 8922.978 - 8973.391: 35.4922% ( 125) 00:08:11.871 8973.391 - 9023.803: 36.5859% ( 140) 00:08:11.871 9023.803 - 9074.215: 37.6484% ( 136) 00:08:11.871 9074.215 - 9124.628: 38.5469% ( 115) 00:08:11.871 9124.628 - 9175.040: 39.4297% ( 113) 00:08:11.871 9175.040 - 9225.452: 40.5859% ( 148) 00:08:11.871 9225.452 - 9275.865: 41.8750% ( 165) 00:08:11.871 9275.865 - 9326.277: 43.1172% ( 159) 00:08:11.871 9326.277 - 9376.689: 44.3438% ( 157) 00:08:11.871 9376.689 - 9427.102: 45.6719% ( 170) 00:08:11.871 9427.102 - 9477.514: 47.1484% ( 189) 00:08:11.871 9477.514 - 9527.926: 48.4922% ( 172) 00:08:11.871 9527.926 - 9578.338: 49.8359% ( 172) 00:08:11.871 9578.338 - 9628.751: 51.2969% ( 187) 00:08:11.871 9628.751 - 9679.163: 53.0391% ( 223) 00:08:11.871 9679.163 - 9729.575: 54.4609% ( 182) 00:08:11.871 9729.575 
- 9779.988: 55.9766% ( 194) 00:08:11.871 9779.988 - 9830.400: 57.4062% ( 183) 00:08:11.871 9830.400 - 9880.812: 59.1250% ( 220) 00:08:11.871 9880.812 - 9931.225: 60.5391% ( 181) 00:08:11.871 9931.225 - 9981.637: 61.7266% ( 152) 00:08:11.871 9981.637 - 10032.049: 63.2266% ( 192) 00:08:11.871 10032.049 - 10082.462: 64.2891% ( 136) 00:08:11.871 10082.462 - 10132.874: 65.4922% ( 154) 00:08:11.871 10132.874 - 10183.286: 66.5625% ( 137) 00:08:11.871 10183.286 - 10233.698: 67.4688% ( 116) 00:08:11.871 10233.698 - 10284.111: 68.2422% ( 99) 00:08:11.871 10284.111 - 10334.523: 69.0703% ( 106) 00:08:11.871 10334.523 - 10384.935: 69.8281% ( 97) 00:08:11.871 10384.935 - 10435.348: 70.6562% ( 106) 00:08:11.871 10435.348 - 10485.760: 71.5703% ( 117) 00:08:11.871 10485.760 - 10536.172: 72.0547% ( 62) 00:08:11.871 10536.172 - 10586.585: 72.5312% ( 61) 00:08:11.871 10586.585 - 10636.997: 72.9453% ( 53) 00:08:11.871 10636.997 - 10687.409: 73.2812% ( 43) 00:08:11.871 10687.409 - 10737.822: 73.6484% ( 47) 00:08:11.871 10737.822 - 10788.234: 74.0625% ( 53) 00:08:11.871 10788.234 - 10838.646: 74.4453% ( 49) 00:08:11.871 10838.646 - 10889.058: 74.7422% ( 38) 00:08:11.871 10889.058 - 10939.471: 75.0312% ( 37) 00:08:11.871 10939.471 - 10989.883: 75.3750% ( 44) 00:08:11.871 10989.883 - 11040.295: 75.6953% ( 41) 00:08:11.871 11040.295 - 11090.708: 75.9141% ( 28) 00:08:11.871 11090.708 - 11141.120: 76.2031% ( 37) 00:08:11.871 11141.120 - 11191.532: 76.5312% ( 42) 00:08:11.871 11191.532 - 11241.945: 76.9297% ( 51) 00:08:11.871 11241.945 - 11292.357: 77.4062% ( 61) 00:08:11.871 11292.357 - 11342.769: 77.9297% ( 67) 00:08:11.871 11342.769 - 11393.182: 78.5859% ( 84) 00:08:11.871 11393.182 - 11443.594: 79.0469% ( 59) 00:08:11.871 11443.594 - 11494.006: 79.6016% ( 71) 00:08:11.871 11494.006 - 11544.418: 80.1875% ( 75) 00:08:11.871 11544.418 - 11594.831: 80.6875% ( 64) 00:08:11.871 11594.831 - 11645.243: 81.0000% ( 40) 00:08:11.871 11645.243 - 11695.655: 81.4297% ( 55) 00:08:11.871 11695.655 - 
11746.068: 81.8203% ( 50) 00:08:11.871 11746.068 - 11796.480: 82.2422% ( 54) 00:08:11.871 11796.480 - 11846.892: 82.6328% ( 50) 00:08:11.871 11846.892 - 11897.305: 83.0781% ( 57) 00:08:11.871 11897.305 - 11947.717: 83.6406% ( 72) 00:08:11.871 11947.717 - 11998.129: 84.1719% ( 68) 00:08:11.871 11998.129 - 12048.542: 84.7031% ( 68) 00:08:11.871 12048.542 - 12098.954: 85.2656% ( 72) 00:08:11.871 12098.954 - 12149.366: 85.9062% ( 82) 00:08:11.871 12149.366 - 12199.778: 86.5000% ( 76) 00:08:11.871 12199.778 - 12250.191: 87.0625% ( 72) 00:08:11.871 12250.191 - 12300.603: 87.5859% ( 67) 00:08:11.871 12300.603 - 12351.015: 88.1250% ( 69) 00:08:11.871 12351.015 - 12401.428: 88.5312% ( 52) 00:08:11.871 12401.428 - 12451.840: 88.9219% ( 50) 00:08:11.871 12451.840 - 12502.252: 89.2812% ( 46) 00:08:11.871 12502.252 - 12552.665: 89.7031% ( 54) 00:08:11.871 12552.665 - 12603.077: 90.2031% ( 64) 00:08:11.871 12603.077 - 12653.489: 90.6797% ( 61) 00:08:11.871 12653.489 - 12703.902: 91.0938% ( 53) 00:08:11.871 12703.902 - 12754.314: 91.6016% ( 65) 00:08:11.871 12754.314 - 12804.726: 91.9375% ( 43) 00:08:11.871 12804.726 - 12855.138: 92.2344% ( 38) 00:08:11.872 12855.138 - 12905.551: 92.5391% ( 39) 00:08:11.872 12905.551 - 13006.375: 93.0000% ( 59) 00:08:11.872 13006.375 - 13107.200: 93.4375% ( 56) 00:08:11.872 13107.200 - 13208.025: 93.7734% ( 43) 00:08:11.872 13208.025 - 13308.849: 94.1094% ( 43) 00:08:11.872 13308.849 - 13409.674: 94.5781% ( 60) 00:08:11.872 13409.674 - 13510.498: 94.9844% ( 52) 00:08:11.872 13510.498 - 13611.323: 95.3984% ( 53) 00:08:11.872 13611.323 - 13712.148: 95.7969% ( 51) 00:08:11.872 13712.148 - 13812.972: 96.0625% ( 34) 00:08:11.872 13812.972 - 13913.797: 96.2734% ( 27) 00:08:11.872 13913.797 - 14014.622: 96.4766% ( 26) 00:08:11.872 14014.622 - 14115.446: 96.7188% ( 31) 00:08:11.872 14115.446 - 14216.271: 97.0000% ( 36) 00:08:11.872 14216.271 - 14317.095: 97.2578% ( 33) 00:08:11.872 14317.095 - 14417.920: 97.5078% ( 32) 00:08:11.872 14417.920 - 14518.745: 
97.7109% ( 26) 00:08:11.872 14518.745 - 14619.569: 97.8047% ( 12) 00:08:11.872 14619.569 - 14720.394: 97.8984% ( 12) 00:08:11.872 14720.394 - 14821.218: 97.9531% ( 7) 00:08:11.872 14821.218 - 14922.043: 98.0000% ( 6) 00:08:11.872 14922.043 - 15022.868: 98.0078% ( 1) 00:08:11.872 15022.868 - 15123.692: 98.0781% ( 9) 00:08:11.872 15123.692 - 15224.517: 98.1406% ( 8) 00:08:11.872 15224.517 - 15325.342: 98.2578% ( 15) 00:08:11.872 15325.342 - 15426.166: 98.3828% ( 16) 00:08:11.872 15426.166 - 15526.991: 98.4844% ( 13) 00:08:11.872 15526.991 - 15627.815: 98.6016% ( 15) 00:08:11.872 15627.815 - 15728.640: 98.6719% ( 9) 00:08:11.872 15728.640 - 15829.465: 98.7500% ( 10) 00:08:11.872 15829.465 - 15930.289: 98.8438% ( 12) 00:08:11.872 15930.289 - 16031.114: 98.9219% ( 10) 00:08:11.872 16031.114 - 16131.938: 98.9609% ( 5) 00:08:11.872 16131.938 - 16232.763: 99.0000% ( 5) 00:08:11.872 17845.957 - 17946.782: 99.0078% ( 1) 00:08:11.872 18249.255 - 18350.080: 99.0234% ( 2) 00:08:11.872 18350.080 - 18450.905: 99.0781% ( 7) 00:08:11.872 18450.905 - 18551.729: 99.1094% ( 4) 00:08:11.872 18551.729 - 18652.554: 99.1406% ( 4) 00:08:11.872 18652.554 - 18753.378: 99.1797% ( 5) 00:08:11.872 18753.378 - 18854.203: 99.2188% ( 5) 00:08:11.872 18854.203 - 18955.028: 99.2500% ( 4) 00:08:11.872 18955.028 - 19055.852: 99.2812% ( 4) 00:08:11.872 19055.852 - 19156.677: 99.3125% ( 4) 00:08:11.872 19156.677 - 19257.502: 99.3516% ( 5) 00:08:11.872 19257.502 - 19358.326: 99.3828% ( 4) 00:08:11.872 19358.326 - 19459.151: 99.4062% ( 3) 00:08:11.872 19459.151 - 19559.975: 99.4375% ( 4) 00:08:11.872 19559.975 - 19660.800: 99.4688% ( 4) 00:08:11.872 19660.800 - 19761.625: 99.5000% ( 4) 00:08:11.872 24097.083 - 24197.908: 99.5156% ( 2) 00:08:11.872 24197.908 - 24298.732: 99.5312% ( 2) 00:08:11.872 24298.732 - 24399.557: 99.5469% ( 2) 00:08:11.872 24399.557 - 24500.382: 99.5547% ( 1) 00:08:11.872 24500.382 - 24601.206: 99.6016% ( 6) 00:08:11.872 24601.206 - 24702.031: 99.7188% ( 15) 00:08:11.872 24702.031 - 
24802.855: 99.7891% ( 9) 00:08:11.872 24802.855 - 24903.680: 99.8516% ( 8) 00:08:11.872 24903.680 - 25004.505: 99.8828% ( 4) 00:08:11.872 25004.505 - 25105.329: 99.9062% ( 3) 00:08:11.872 25105.329 - 25206.154: 99.9297% ( 3) 00:08:11.872 25206.154 - 25306.978: 99.9609% ( 4) 00:08:11.872 25306.978 - 25407.803: 99.9766% ( 2) 00:08:11.872 25407.803 - 25508.628: 100.0000% ( 3) 00:08:11.872 00:08:11.872 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:11.872 ============================================================================== 00:08:11.872 Range in us Cumulative IO count 00:08:11.872 4864.788 - 4889.994: 0.0078% ( 1) 00:08:11.872 4889.994 - 4915.200: 0.0391% ( 4) 00:08:11.872 4915.200 - 4940.406: 0.0781% ( 5) 00:08:11.872 4940.406 - 4965.612: 0.1172% ( 5) 00:08:11.872 4965.612 - 4990.818: 0.1953% ( 10) 00:08:11.872 4990.818 - 5016.025: 0.2969% ( 13) 00:08:11.872 5016.025 - 5041.231: 0.3438% ( 6) 00:08:11.872 5041.231 - 5066.437: 0.3594% ( 2) 00:08:11.872 5066.437 - 5091.643: 0.3750% ( 2) 00:08:11.872 5091.643 - 5116.849: 0.3828% ( 1) 00:08:11.872 5116.849 - 5142.055: 0.3984% ( 2) 00:08:11.872 5142.055 - 5167.262: 0.4141% ( 2) 00:08:11.872 5167.262 - 5192.468: 0.4297% ( 2) 00:08:11.872 5192.468 - 5217.674: 0.4375% ( 1) 00:08:11.872 5217.674 - 5242.880: 0.4531% ( 2) 00:08:11.872 5242.880 - 5268.086: 0.4688% ( 2) 00:08:11.872 5268.086 - 5293.292: 0.4844% ( 2) 00:08:11.872 5293.292 - 5318.498: 0.4922% ( 1) 00:08:11.872 5318.498 - 5343.705: 0.5000% ( 1) 00:08:11.872 7208.960 - 7259.372: 0.5234% ( 3) 00:08:11.872 7259.372 - 7309.785: 0.5469% ( 3) 00:08:11.872 7309.785 - 7360.197: 0.5938% ( 6) 00:08:11.872 7360.197 - 7410.609: 0.6641% ( 9) 00:08:11.872 7410.609 - 7461.022: 0.8438% ( 23) 00:08:11.872 7461.022 - 7511.434: 1.0312% ( 24) 00:08:11.872 7511.434 - 7561.846: 1.1953% ( 21) 00:08:11.872 7561.846 - 7612.258: 1.4531% ( 33) 00:08:11.872 7612.258 - 7662.671: 1.6797% ( 29) 00:08:11.872 7662.671 - 7713.083: 1.9766% ( 38) 00:08:11.872 7713.083 - 
7763.495: 2.3594% ( 49) 00:08:11.872 7763.495 - 7813.908: 2.9297% ( 73) 00:08:11.872 7813.908 - 7864.320: 3.8203% ( 114) 00:08:11.872 7864.320 - 7914.732: 4.9375% ( 143) 00:08:11.872 7914.732 - 7965.145: 6.3984% ( 187) 00:08:11.872 7965.145 - 8015.557: 8.1328% ( 222) 00:08:11.872 8015.557 - 8065.969: 10.1406% ( 257) 00:08:11.872 8065.969 - 8116.382: 12.5312% ( 306) 00:08:11.872 8116.382 - 8166.794: 14.5859% ( 263) 00:08:11.872 8166.794 - 8217.206: 16.4766% ( 242) 00:08:11.872 8217.206 - 8267.618: 18.1484% ( 214) 00:08:11.872 8267.618 - 8318.031: 20.1172% ( 252) 00:08:11.872 8318.031 - 8368.443: 22.1016% ( 254) 00:08:11.872 8368.443 - 8418.855: 23.7266% ( 208) 00:08:11.872 8418.855 - 8469.268: 25.4844% ( 225) 00:08:11.872 8469.268 - 8519.680: 26.7812% ( 166) 00:08:11.872 8519.680 - 8570.092: 27.9609% ( 151) 00:08:11.872 8570.092 - 8620.505: 28.9844% ( 131) 00:08:11.872 8620.505 - 8670.917: 30.0234% ( 133) 00:08:11.872 8670.917 - 8721.329: 30.8203% ( 102) 00:08:11.872 8721.329 - 8771.742: 31.6562% ( 107) 00:08:11.872 8771.742 - 8822.154: 32.6094% ( 122) 00:08:11.872 8822.154 - 8872.566: 33.4922% ( 113) 00:08:11.872 8872.566 - 8922.978: 34.3906% ( 115) 00:08:11.872 8922.978 - 8973.391: 35.2891% ( 115) 00:08:11.872 8973.391 - 9023.803: 36.0703% ( 100) 00:08:11.872 9023.803 - 9074.215: 36.8125% ( 95) 00:08:11.872 9074.215 - 9124.628: 37.8906% ( 138) 00:08:11.872 9124.628 - 9175.040: 39.1719% ( 164) 00:08:11.872 9175.040 - 9225.452: 40.4844% ( 168) 00:08:11.872 9225.452 - 9275.865: 42.0078% ( 195) 00:08:11.872 9275.865 - 9326.277: 43.6484% ( 210) 00:08:11.872 9326.277 - 9376.689: 45.0938% ( 185) 00:08:11.872 9376.689 - 9427.102: 47.0938% ( 256) 00:08:11.872 9427.102 - 9477.514: 48.9531% ( 238) 00:08:11.872 9477.514 - 9527.926: 50.2891% ( 171) 00:08:11.872 9527.926 - 9578.338: 51.5312% ( 159) 00:08:11.872 9578.338 - 9628.751: 52.7969% ( 162) 00:08:11.872 9628.751 - 9679.163: 54.1719% ( 176) 00:08:11.872 9679.163 - 9729.575: 55.4922% ( 169) 00:08:11.872 9729.575 - 
9779.988: 56.4531% ( 123) 00:08:11.872 9779.988 - 9830.400: 57.6875% ( 158) 00:08:11.872 9830.400 - 9880.812: 59.1016% ( 181) 00:08:11.872 9880.812 - 9931.225: 60.4922% ( 178) 00:08:11.872 9931.225 - 9981.637: 61.7656% ( 163) 00:08:11.872 9981.637 - 10032.049: 62.9688% ( 154) 00:08:11.872 10032.049 - 10082.462: 64.4297% ( 187) 00:08:11.872 10082.462 - 10132.874: 65.4688% ( 133) 00:08:11.872 10132.874 - 10183.286: 66.5625% ( 140) 00:08:11.872 10183.286 - 10233.698: 67.6484% ( 139) 00:08:11.872 10233.698 - 10284.111: 68.4219% ( 99) 00:08:11.872 10284.111 - 10334.523: 69.2578% ( 107) 00:08:11.872 10334.523 - 10384.935: 70.1328% ( 112) 00:08:11.872 10384.935 - 10435.348: 70.9531% ( 105) 00:08:11.872 10435.348 - 10485.760: 71.5859% ( 81) 00:08:11.872 10485.760 - 10536.172: 72.0391% ( 58) 00:08:11.873 10536.172 - 10586.585: 72.4375% ( 51) 00:08:11.873 10586.585 - 10636.997: 72.8359% ( 51) 00:08:11.873 10636.997 - 10687.409: 73.3594% ( 67) 00:08:11.873 10687.409 - 10737.822: 73.7188% ( 46) 00:08:11.873 10737.822 - 10788.234: 74.0000% ( 36) 00:08:11.873 10788.234 - 10838.646: 74.2891% ( 37) 00:08:11.873 10838.646 - 10889.058: 74.5703% ( 36) 00:08:11.873 10889.058 - 10939.471: 74.8516% ( 36) 00:08:11.873 10939.471 - 10989.883: 75.1641% ( 40) 00:08:11.873 10989.883 - 11040.295: 75.4922% ( 42) 00:08:11.873 11040.295 - 11090.708: 75.7812% ( 37) 00:08:11.873 11090.708 - 11141.120: 76.0000% ( 28) 00:08:11.873 11141.120 - 11191.532: 76.2578% ( 33) 00:08:11.873 11191.532 - 11241.945: 76.5234% ( 34) 00:08:11.873 11241.945 - 11292.357: 76.8359% ( 40) 00:08:11.873 11292.357 - 11342.769: 77.1953% ( 46) 00:08:11.873 11342.769 - 11393.182: 77.6250% ( 55) 00:08:11.873 11393.182 - 11443.594: 78.1406% ( 66) 00:08:11.873 11443.594 - 11494.006: 78.7109% ( 73) 00:08:11.873 11494.006 - 11544.418: 79.4141% ( 90) 00:08:11.873 11544.418 - 11594.831: 80.0312% ( 79) 00:08:11.873 11594.831 - 11645.243: 80.5547% ( 67) 00:08:11.873 11645.243 - 11695.655: 81.3281% ( 99) 00:08:11.873 11695.655 - 
11746.068: 81.9531% ( 80) 00:08:11.873 11746.068 - 11796.480: 82.5547% ( 77) 00:08:11.873 11796.480 - 11846.892: 83.1328% ( 74) 00:08:11.873 11846.892 - 11897.305: 83.6250% ( 63) 00:08:11.873 11897.305 - 11947.717: 84.3672% ( 95) 00:08:11.873 11947.717 - 11998.129: 85.0000% ( 81) 00:08:11.873 11998.129 - 12048.542: 85.4766% ( 61) 00:08:11.873 12048.542 - 12098.954: 86.0703% ( 76) 00:08:11.873 12098.954 - 12149.366: 86.5859% ( 66) 00:08:11.873 12149.366 - 12199.778: 87.0234% ( 56) 00:08:11.873 12199.778 - 12250.191: 87.5078% ( 62) 00:08:11.873 12250.191 - 12300.603: 87.8984% ( 50) 00:08:11.873 12300.603 - 12351.015: 88.3438% ( 57) 00:08:11.873 12351.015 - 12401.428: 88.6875% ( 44) 00:08:11.873 12401.428 - 12451.840: 89.1250% ( 56) 00:08:11.873 12451.840 - 12502.252: 89.5547% ( 55) 00:08:11.873 12502.252 - 12552.665: 89.9531% ( 51) 00:08:11.873 12552.665 - 12603.077: 90.2734% ( 41) 00:08:11.873 12603.077 - 12653.489: 90.5391% ( 34) 00:08:11.873 12653.489 - 12703.902: 90.7812% ( 31) 00:08:11.873 12703.902 - 12754.314: 91.1328% ( 45) 00:08:11.873 12754.314 - 12804.726: 91.4062% ( 35) 00:08:11.873 12804.726 - 12855.138: 91.7109% ( 39) 00:08:11.873 12855.138 - 12905.551: 92.0469% ( 43) 00:08:11.873 12905.551 - 13006.375: 92.6250% ( 74) 00:08:11.873 13006.375 - 13107.200: 93.2812% ( 84) 00:08:11.873 13107.200 - 13208.025: 93.8594% ( 74) 00:08:11.873 13208.025 - 13308.849: 94.3750% ( 66) 00:08:11.873 13308.849 - 13409.674: 94.9141% ( 69) 00:08:11.873 13409.674 - 13510.498: 95.3203% ( 52) 00:08:11.873 13510.498 - 13611.323: 95.6328% ( 40) 00:08:11.873 13611.323 - 13712.148: 95.9141% ( 36) 00:08:11.873 13712.148 - 13812.972: 96.2031% ( 37) 00:08:11.873 13812.972 - 13913.797: 96.3750% ( 22) 00:08:11.873 13913.797 - 14014.622: 96.4766% ( 13) 00:08:11.873 14014.622 - 14115.446: 96.6328% ( 20) 00:08:11.873 14115.446 - 14216.271: 96.7344% ( 13) 00:08:11.873 14216.271 - 14317.095: 96.8359% ( 13) 00:08:11.873 14317.095 - 14417.920: 97.0312% ( 25) 00:08:11.873 14417.920 - 14518.745: 
97.2656% ( 30) 00:08:11.873 14518.745 - 14619.569: 97.6016% ( 43) 00:08:11.873 14619.569 - 14720.394: 97.7031% ( 13) 00:08:11.873 14720.394 - 14821.218: 97.7969% ( 12) 00:08:11.873 14821.218 - 14922.043: 97.9531% ( 20) 00:08:11.873 14922.043 - 15022.868: 98.0547% ( 13) 00:08:11.873 15022.868 - 15123.692: 98.1641% ( 14) 00:08:11.873 15123.692 - 15224.517: 98.2734% ( 14) 00:08:11.873 15224.517 - 15325.342: 98.3281% ( 7) 00:08:11.873 15325.342 - 15426.166: 98.4141% ( 11) 00:08:11.873 15426.166 - 15526.991: 98.5156% ( 13) 00:08:11.873 15526.991 - 15627.815: 98.6172% ( 13) 00:08:11.873 15627.815 - 15728.640: 98.6875% ( 9) 00:08:11.873 15728.640 - 15829.465: 98.7422% ( 7) 00:08:11.873 15829.465 - 15930.289: 98.7891% ( 6) 00:08:11.873 15930.289 - 16031.114: 98.8359% ( 6) 00:08:11.873 16031.114 - 16131.938: 98.8828% ( 6) 00:08:11.873 16131.938 - 16232.763: 98.9297% ( 6) 00:08:11.873 16232.763 - 16333.588: 98.9766% ( 6) 00:08:11.873 16333.588 - 16434.412: 99.0000% ( 3) 00:08:11.873 17946.782 - 18047.606: 99.0234% ( 3) 00:08:11.873 18047.606 - 18148.431: 99.0703% ( 6) 00:08:11.873 18148.431 - 18249.255: 99.1016% ( 4) 00:08:11.873 18249.255 - 18350.080: 99.1406% ( 5) 00:08:11.873 18350.080 - 18450.905: 99.1719% ( 4) 00:08:11.873 18450.905 - 18551.729: 99.2188% ( 6) 00:08:11.873 18652.554 - 18753.378: 99.2344% ( 2) 00:08:11.873 18753.378 - 18854.203: 99.2578% ( 3) 00:08:11.873 18854.203 - 18955.028: 99.2891% ( 4) 00:08:11.873 18955.028 - 19055.852: 99.3125% ( 3) 00:08:11.873 19055.852 - 19156.677: 99.3438% ( 4) 00:08:11.873 19156.677 - 19257.502: 99.3750% ( 4) 00:08:11.873 19257.502 - 19358.326: 99.4062% ( 4) 00:08:11.873 19358.326 - 19459.151: 99.4375% ( 4) 00:08:11.873 19459.151 - 19559.975: 99.4609% ( 3) 00:08:11.873 19559.975 - 19660.800: 99.5000% ( 5) 00:08:11.873 23693.785 - 23794.609: 99.5078% ( 1) 00:08:11.873 23895.434 - 23996.258: 99.5234% ( 2) 00:08:11.873 23996.258 - 24097.083: 99.5312% ( 1) 00:08:11.873 24097.083 - 24197.908: 99.5859% ( 7) 00:08:11.873 24298.732 - 
24399.557: 99.5938% ( 1) 00:08:11.873 24399.557 - 24500.382: 99.6094% ( 2) 00:08:11.873 24500.382 - 24601.206: 99.6484% ( 5) 00:08:11.873 24601.206 - 24702.031: 99.7188% ( 9) 00:08:11.873 24702.031 - 24802.855: 99.8047% ( 11) 00:08:11.873 24802.855 - 24903.680: 99.8750% ( 9) 00:08:11.873 24903.680 - 25004.505: 99.9062% ( 4) 00:08:11.873 25004.505 - 25105.329: 99.9375% ( 4) 00:08:11.873 25105.329 - 25206.154: 99.9766% ( 5) 00:08:11.873 25206.154 - 25306.978: 100.0000% ( 3) 00:08:11.873 00:08:11.873 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:11.873 ============================================================================== 00:08:11.873 Range in us Cumulative IO count 00:08:11.873 4234.634 - 4259.840: 0.0078% ( 1) 00:08:11.873 4436.283 - 4461.489: 0.0703% ( 8) 00:08:11.873 4461.489 - 4486.695: 0.1641% ( 12) 00:08:11.873 4486.695 - 4511.902: 0.2344% ( 9) 00:08:11.873 4511.902 - 4537.108: 0.3516% ( 15) 00:08:11.873 4537.108 - 4562.314: 0.3750% ( 3) 00:08:11.873 4562.314 - 4587.520: 0.3906% ( 2) 00:08:11.873 4587.520 - 4612.726: 0.4062% ( 2) 00:08:11.873 4612.726 - 4637.932: 0.4219% ( 2) 00:08:11.873 4637.932 - 4663.138: 0.4375% ( 2) 00:08:11.873 4663.138 - 4688.345: 0.4531% ( 2) 00:08:11.873 4688.345 - 4713.551: 0.4609% ( 1) 00:08:11.873 4713.551 - 4738.757: 0.4766% ( 2) 00:08:11.873 4738.757 - 4763.963: 0.4922% ( 2) 00:08:11.873 4763.963 - 4789.169: 0.5000% ( 1) 00:08:11.873 7158.548 - 7208.960: 0.5078% ( 1) 00:08:11.873 7360.197 - 7410.609: 0.5156% ( 1) 00:08:11.873 7410.609 - 7461.022: 0.5859% ( 9) 00:08:11.873 7461.022 - 7511.434: 0.7109% ( 16) 00:08:11.873 7511.434 - 7561.846: 0.9531% ( 31) 00:08:11.873 7561.846 - 7612.258: 1.3281% ( 48) 00:08:11.873 7612.258 - 7662.671: 1.5859% ( 33) 00:08:11.873 7662.671 - 7713.083: 1.8984% ( 40) 00:08:11.873 7713.083 - 7763.495: 2.2656% ( 47) 00:08:11.873 7763.495 - 7813.908: 2.8750% ( 78) 00:08:11.873 7813.908 - 7864.320: 3.7812% ( 116) 00:08:11.873 7864.320 - 7914.732: 5.1406% ( 174) 00:08:11.873 
7914.732 - 7965.145: 6.5859% ( 185) 00:08:11.873 7965.145 - 8015.557: 8.3438% ( 225) 00:08:11.873 8015.557 - 8065.969: 10.5312% ( 280) 00:08:11.873 8065.969 - 8116.382: 12.7422% ( 283) 00:08:11.873 8116.382 - 8166.794: 14.8516% ( 270) 00:08:11.873 8166.794 - 8217.206: 16.8594% ( 257) 00:08:11.873 8217.206 - 8267.618: 18.6094% ( 224) 00:08:11.873 8267.618 - 8318.031: 20.6172% ( 257) 00:08:11.873 8318.031 - 8368.443: 22.6484% ( 260) 00:08:11.873 8368.443 - 8418.855: 24.2891% ( 210) 00:08:11.873 8418.855 - 8469.268: 25.7344% ( 185) 00:08:11.873 8469.268 - 8519.680: 27.1641% ( 183) 00:08:11.873 8519.680 - 8570.092: 28.4531% ( 165) 00:08:11.873 8570.092 - 8620.505: 29.6016% ( 147) 00:08:11.873 8620.505 - 8670.917: 30.5312% ( 119) 00:08:11.873 8670.917 - 8721.329: 31.2500% ( 92) 00:08:11.873 8721.329 - 8771.742: 32.0469% ( 102) 00:08:11.873 8771.742 - 8822.154: 32.7109% ( 85) 00:08:11.873 8822.154 - 8872.566: 33.3906% ( 87) 00:08:11.873 8872.566 - 8922.978: 34.1562% ( 98) 00:08:11.873 8922.978 - 8973.391: 35.3125% ( 148) 00:08:11.873 8973.391 - 9023.803: 36.4297% ( 143) 00:08:11.873 9023.803 - 9074.215: 37.5156% ( 139) 00:08:11.873 9074.215 - 9124.628: 38.6406% ( 144) 00:08:11.873 9124.628 - 9175.040: 40.0547% ( 181) 00:08:11.873 9175.040 - 9225.452: 41.3672% ( 168) 00:08:11.873 9225.452 - 9275.865: 42.5469% ( 151) 00:08:11.873 9275.865 - 9326.277: 43.9453% ( 179) 00:08:11.873 9326.277 - 9376.689: 45.3047% ( 174) 00:08:11.873 9376.689 - 9427.102: 47.1016% ( 230) 00:08:11.873 9427.102 - 9477.514: 48.1719% ( 137) 00:08:11.873 9477.514 - 9527.926: 49.5469% ( 176) 00:08:11.873 9527.926 - 9578.338: 51.0703% ( 195) 00:08:11.873 9578.338 - 9628.751: 52.5391% ( 188) 00:08:11.873 9628.751 - 9679.163: 53.7812% ( 159) 00:08:11.873 9679.163 - 9729.575: 54.8906% ( 142) 00:08:11.873 9729.575 - 9779.988: 56.0391% ( 147) 00:08:11.873 9779.988 - 9830.400: 57.3828% ( 172) 00:08:11.873 9830.400 - 9880.812: 58.7734% ( 178) 00:08:11.873 9880.812 - 9931.225: 60.3516% ( 202) 00:08:11.873 
9931.225 - 9981.637: 61.7422% ( 178) 00:08:11.873 9981.637 - 10032.049: 63.1016% ( 174) 00:08:11.873 10032.049 - 10082.462: 64.2734% ( 150) 00:08:11.873 10082.462 - 10132.874: 65.5078% ( 158) 00:08:11.874 10132.874 - 10183.286: 66.6953% ( 152) 00:08:11.874 10183.286 - 10233.698: 67.6328% ( 120) 00:08:11.874 10233.698 - 10284.111: 68.5547% ( 118) 00:08:11.874 10284.111 - 10334.523: 69.3984% ( 108) 00:08:11.874 10334.523 - 10384.935: 70.2891% ( 114) 00:08:11.874 10384.935 - 10435.348: 70.8672% ( 74) 00:08:11.874 10435.348 - 10485.760: 71.4219% ( 71) 00:08:11.874 10485.760 - 10536.172: 71.9375% ( 66) 00:08:11.874 10536.172 - 10586.585: 72.4609% ( 67) 00:08:11.874 10586.585 - 10636.997: 72.9141% ( 58) 00:08:11.874 10636.997 - 10687.409: 73.2422% ( 42) 00:08:11.874 10687.409 - 10737.822: 73.6328% ( 50) 00:08:11.874 10737.822 - 10788.234: 73.9141% ( 36) 00:08:11.874 10788.234 - 10838.646: 74.2031% ( 37) 00:08:11.874 10838.646 - 10889.058: 74.4609% ( 33) 00:08:11.874 10889.058 - 10939.471: 74.7266% ( 34) 00:08:11.874 10939.471 - 10989.883: 74.9609% ( 30) 00:08:11.874 10989.883 - 11040.295: 75.2969% ( 43) 00:08:11.874 11040.295 - 11090.708: 75.6172% ( 41) 00:08:11.874 11090.708 - 11141.120: 75.9141% ( 38) 00:08:11.874 11141.120 - 11191.532: 76.3359% ( 54) 00:08:11.874 11191.532 - 11241.945: 76.6641% ( 42) 00:08:11.874 11241.945 - 11292.357: 77.0859% ( 54) 00:08:11.874 11292.357 - 11342.769: 77.7656% ( 87) 00:08:11.874 11342.769 - 11393.182: 78.3359% ( 73) 00:08:11.874 11393.182 - 11443.594: 78.9922% ( 84) 00:08:11.874 11443.594 - 11494.006: 79.4844% ( 63) 00:08:11.874 11494.006 - 11544.418: 79.9453% ( 59) 00:08:11.874 11544.418 - 11594.831: 80.4609% ( 66) 00:08:11.874 11594.831 - 11645.243: 80.9922% ( 68) 00:08:11.874 11645.243 - 11695.655: 81.5234% ( 68) 00:08:11.874 11695.655 - 11746.068: 82.0547% ( 68) 00:08:11.874 11746.068 - 11796.480: 82.5781% ( 67) 00:08:11.874 11796.480 - 11846.892: 83.1406% ( 72) 00:08:11.874 11846.892 - 11897.305: 83.6484% ( 65) 00:08:11.874 
11897.305 - 11947.717: 84.1719% ( 67) 00:08:11.874 11947.717 - 11998.129: 84.6641% ( 63) 00:08:11.874 11998.129 - 12048.542: 85.2656% ( 77) 00:08:11.874 12048.542 - 12098.954: 85.7812% ( 66) 00:08:11.874 12098.954 - 12149.366: 86.3750% ( 76) 00:08:11.874 12149.366 - 12199.778: 86.8047% ( 55) 00:08:11.874 12199.778 - 12250.191: 87.2031% ( 51) 00:08:11.874 12250.191 - 12300.603: 87.6328% ( 55) 00:08:11.874 12300.603 - 12351.015: 88.0547% ( 54) 00:08:11.874 12351.015 - 12401.428: 88.5000% ( 57) 00:08:11.874 12401.428 - 12451.840: 89.0156% ( 66) 00:08:11.874 12451.840 - 12502.252: 89.3750% ( 46) 00:08:11.874 12502.252 - 12552.665: 89.7188% ( 44) 00:08:11.874 12552.665 - 12603.077: 90.1406% ( 54) 00:08:11.874 12603.077 - 12653.489: 90.5391% ( 51) 00:08:11.874 12653.489 - 12703.902: 90.8125% ( 35) 00:08:11.874 12703.902 - 12754.314: 91.0859% ( 35) 00:08:11.874 12754.314 - 12804.726: 91.3594% ( 35) 00:08:11.874 12804.726 - 12855.138: 91.7812% ( 54) 00:08:11.874 12855.138 - 12905.551: 92.1328% ( 45) 00:08:11.874 12905.551 - 13006.375: 92.7969% ( 85) 00:08:11.874 13006.375 - 13107.200: 93.3750% ( 74) 00:08:11.874 13107.200 - 13208.025: 93.9531% ( 74) 00:08:11.874 13208.025 - 13308.849: 94.3828% ( 55) 00:08:11.874 13308.849 - 13409.674: 94.7812% ( 51) 00:08:11.874 13409.674 - 13510.498: 95.1797% ( 51) 00:08:11.874 13510.498 - 13611.323: 95.6016% ( 54) 00:08:11.874 13611.323 - 13712.148: 95.9062% ( 39) 00:08:11.874 13712.148 - 13812.972: 96.0625% ( 20) 00:08:11.874 13812.972 - 13913.797: 96.2734% ( 27) 00:08:11.874 13913.797 - 14014.622: 96.4297% ( 20) 00:08:11.874 14014.622 - 14115.446: 96.5469% ( 15) 00:08:11.874 14115.446 - 14216.271: 96.6250% ( 10) 00:08:11.874 14216.271 - 14317.095: 96.7344% ( 14) 00:08:11.874 14317.095 - 14417.920: 96.8281% ( 12) 00:08:11.874 14417.920 - 14518.745: 96.9297% ( 13) 00:08:11.874 14518.745 - 14619.569: 97.0938% ( 21) 00:08:11.874 14619.569 - 14720.394: 97.2500% ( 20) 00:08:11.874 14720.394 - 14821.218: 97.6094% ( 46) 00:08:11.874 14821.218 
- 14922.043: 98.0234% ( 53) 00:08:11.874 14922.043 - 15022.868: 98.1562% ( 17) 00:08:11.874 15022.868 - 15123.692: 98.2812% ( 16) 00:08:11.874 15123.692 - 15224.517: 98.3359% ( 7) 00:08:11.874 15224.517 - 15325.342: 98.3750% ( 5) 00:08:11.874 15325.342 - 15426.166: 98.4609% ( 11) 00:08:11.874 15426.166 - 15526.991: 98.5547% ( 12) 00:08:11.874 15526.991 - 15627.815: 98.6562% ( 13) 00:08:11.874 15627.815 - 15728.640: 98.6953% ( 5) 00:08:11.874 15728.640 - 15829.465: 98.7344% ( 5) 00:08:11.874 15829.465 - 15930.289: 98.7734% ( 5) 00:08:11.874 15930.289 - 16031.114: 98.8203% ( 6) 00:08:11.874 16031.114 - 16131.938: 98.8672% ( 6) 00:08:11.874 16131.938 - 16232.763: 98.9062% ( 5) 00:08:11.874 16232.763 - 16333.588: 98.9531% ( 6) 00:08:11.874 16333.588 - 16434.412: 99.0000% ( 6) 00:08:11.874 17745.132 - 17845.957: 99.0078% ( 1) 00:08:11.874 18047.606 - 18148.431: 99.0469% ( 5) 00:08:11.874 18148.431 - 18249.255: 99.0859% ( 5) 00:08:11.874 18249.255 - 18350.080: 99.1094% ( 3) 00:08:11.874 18350.080 - 18450.905: 99.1562% ( 6) 00:08:11.874 18450.905 - 18551.729: 99.2031% ( 6) 00:08:11.874 18551.729 - 18652.554: 99.2734% ( 9) 00:08:11.874 18652.554 - 18753.378: 99.3125% ( 5) 00:08:11.874 18753.378 - 18854.203: 99.3516% ( 5) 00:08:11.874 18854.203 - 18955.028: 99.3984% ( 6) 00:08:11.874 18955.028 - 19055.852: 99.4375% ( 5) 00:08:11.874 19055.852 - 19156.677: 99.4766% ( 5) 00:08:11.874 19156.677 - 19257.502: 99.5000% ( 3) 00:08:11.874 23492.135 - 23592.960: 99.5156% ( 2) 00:08:11.874 23592.960 - 23693.785: 99.5781% ( 8) 00:08:11.874 23693.785 - 23794.609: 99.7344% ( 20) 00:08:11.874 23794.609 - 23895.434: 99.7578% ( 3) 00:08:11.874 23996.258 - 24097.083: 99.7734% ( 2) 00:08:11.874 24097.083 - 24197.908: 99.8203% ( 6) 00:08:11.874 24197.908 - 24298.732: 99.8750% ( 7) 00:08:11.874 24298.732 - 24399.557: 99.9141% ( 5) 00:08:11.874 24399.557 - 24500.382: 99.9688% ( 7) 00:08:11.874 24500.382 - 24601.206: 100.0000% ( 4) 00:08:11.874 00:08:11.874 08:52:05 nvme.nvme_perf -- 
nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:11.874 00:08:11.874 real 0m2.470s 00:08:11.874 user 0m2.160s 00:08:11.874 sys 0m0.187s 00:08:11.874 08:52:05 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.874 ************************************ 00:08:11.874 END TEST nvme_perf 00:08:11.874 08:52:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.874 ************************************ 00:08:11.874 08:52:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.874 ************************************ 00:08:11.874 START TEST nvme_hello_world 00:08:11.874 ************************************ 00:08:11.874 08:52:05 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:11.874 Initializing NVMe Controllers 00:08:11.874 Attached to 0000:00:10.0 00:08:11.874 Namespace ID: 1 size: 6GB 00:08:11.874 Attached to 0000:00:11.0 00:08:11.874 Namespace ID: 1 size: 5GB 00:08:11.874 Attached to 0000:00:13.0 00:08:11.874 Namespace ID: 1 size: 1GB 00:08:11.874 Attached to 0000:00:12.0 00:08:11.874 Namespace ID: 1 size: 4GB 00:08:11.874 Namespace ID: 2 size: 4GB 00:08:11.874 Namespace ID: 3 size: 4GB 00:08:11.874 Initialization complete. 00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 
00:08:11.874 INFO: using host memory buffer for IO 00:08:11.874 Hello world! 00:08:11.874 00:08:11.874 real 0m0.191s 00:08:11.874 user 0m0.056s 00:08:11.874 sys 0m0.092s 00:08:11.874 08:52:05 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.874 08:52:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:11.874 ************************************ 00:08:11.874 END TEST nvme_hello_world 00:08:11.874 ************************************ 00:08:11.874 08:52:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.874 08:52:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.874 ************************************ 00:08:11.874 START TEST nvme_sgl 00:08:11.874 ************************************ 00:08:11.874 08:52:05 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:12.136 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:12.136 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:12.136 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:12.136 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:12.136 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:12.136 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:12.136 0000:00:11.0: build_io_request_11 Invalid IO length parameter 
00:08:12.136 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:12.136 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:12.136 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:12.136 NVMe Readv/Writev Request test 00:08:12.136 Attached to 0000:00:10.0 00:08:12.136 Attached to 0000:00:11.0 00:08:12.136 Attached to 0000:00:13.0 00:08:12.136 Attached to 0000:00:12.0 00:08:12.136 
0000:00:10.0: build_io_request_2 test passed 00:08:12.136 0000:00:10.0: build_io_request_4 test passed 00:08:12.136 0000:00:10.0: build_io_request_5 test passed 00:08:12.136 0000:00:10.0: build_io_request_6 test passed 00:08:12.136 0000:00:10.0: build_io_request_7 test passed 00:08:12.136 0000:00:10.0: build_io_request_10 test passed 00:08:12.136 0000:00:11.0: build_io_request_2 test passed 00:08:12.136 0000:00:11.0: build_io_request_4 test passed 00:08:12.136 0000:00:11.0: build_io_request_5 test passed 00:08:12.136 0000:00:11.0: build_io_request_6 test passed 00:08:12.136 0000:00:11.0: build_io_request_7 test passed 00:08:12.136 0000:00:11.0: build_io_request_10 test passed 00:08:12.136 Cleaning up... 00:08:12.136 00:08:12.136 real 0m0.262s 00:08:12.136 user 0m0.121s 00:08:12.136 sys 0m0.094s 00:08:12.136 08:52:06 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.136 08:52:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:12.137 ************************************ 00:08:12.137 END TEST nvme_sgl 00:08:12.137 ************************************ 00:08:12.137 08:52:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:12.137 08:52:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.137 08:52:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.137 08:52:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.137 ************************************ 00:08:12.137 START TEST nvme_e2edp 00:08:12.137 ************************************ 00:08:12.137 08:52:06 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:12.400 NVMe Write/Read with End-to-End data protection test 00:08:12.400 Attached to 0000:00:10.0 00:08:12.400 Attached to 0000:00:11.0 00:08:12.400 Attached to 0000:00:13.0 00:08:12.400 Attached to 0000:00:12.0 00:08:12.400 Cleaning up... 
00:08:12.400 00:08:12.400 real 0m0.196s 00:08:12.400 user 0m0.065s 00:08:12.400 sys 0m0.083s 00:08:12.400 08:52:06 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.400 08:52:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:12.400 ************************************ 00:08:12.400 END TEST nvme_e2edp 00:08:12.400 ************************************ 00:08:12.400 08:52:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:12.400 08:52:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.400 08:52:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.400 08:52:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.400 ************************************ 00:08:12.400 START TEST nvme_reserve 00:08:12.400 ************************************ 00:08:12.400 08:52:06 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:12.661 ===================================================== 00:08:12.661 NVMe Controller at PCI bus 0, device 16, function 0 00:08:12.661 ===================================================== 00:08:12.661 Reservations: Not Supported 00:08:12.661 ===================================================== 00:08:12.661 NVMe Controller at PCI bus 0, device 17, function 0 00:08:12.661 ===================================================== 00:08:12.661 Reservations: Not Supported 00:08:12.661 ===================================================== 00:08:12.661 NVMe Controller at PCI bus 0, device 19, function 0 00:08:12.661 ===================================================== 00:08:12.661 Reservations: Not Supported 00:08:12.661 ===================================================== 00:08:12.661 NVMe Controller at PCI bus 0, device 18, function 0 00:08:12.661 ===================================================== 00:08:12.661 Reservations: Not Supported 
00:08:12.661 Reservation test passed 00:08:12.661 00:08:12.661 real 0m0.183s 00:08:12.661 user 0m0.055s 00:08:12.661 sys 0m0.084s 00:08:12.661 08:52:06 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.661 08:52:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:12.661 ************************************ 00:08:12.661 END TEST nvme_reserve 00:08:12.661 ************************************ 00:08:12.661 08:52:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:12.661 08:52:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.661 08:52:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.661 08:52:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.661 ************************************ 00:08:12.661 START TEST nvme_err_injection 00:08:12.661 ************************************ 00:08:12.661 08:52:06 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:12.923 NVMe Error Injection test 00:08:12.923 Attached to 0000:00:10.0 00:08:12.923 Attached to 0000:00:11.0 00:08:12.923 Attached to 0000:00:13.0 00:08:12.923 Attached to 0000:00:12.0 00:08:12.923 0000:00:13.0: get features failed as expected 00:08:12.923 0000:00:12.0: get features failed as expected 00:08:12.923 0000:00:10.0: get features failed as expected 00:08:12.923 0000:00:11.0: get features failed as expected 00:08:12.923 0000:00:10.0: get features successfully as expected 00:08:12.923 0000:00:11.0: get features successfully as expected 00:08:12.923 0000:00:13.0: get features successfully as expected 00:08:12.923 0000:00:12.0: get features successfully as expected 00:08:12.923 0000:00:11.0: read failed as expected 00:08:12.923 0000:00:13.0: read failed as expected 00:08:12.923 0000:00:12.0: read failed as expected 00:08:12.923 0000:00:10.0: read failed as 
expected 00:08:12.923 0000:00:10.0: read successfully as expected 00:08:12.923 0000:00:11.0: read successfully as expected 00:08:12.923 0000:00:13.0: read successfully as expected 00:08:12.923 0000:00:12.0: read successfully as expected 00:08:12.923 Cleaning up... 00:08:12.923 00:08:12.923 real 0m0.195s 00:08:12.923 user 0m0.063s 00:08:12.923 sys 0m0.084s 00:08:12.923 08:52:06 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.923 08:52:06 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:12.923 ************************************ 00:08:12.923 END TEST nvme_err_injection 00:08:12.923 ************************************ 00:08:12.923 08:52:06 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:12.923 08:52:06 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:12.923 08:52:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.923 08:52:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.923 ************************************ 00:08:12.923 START TEST nvme_overhead 00:08:12.923 ************************************ 00:08:12.923 08:52:06 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:14.311 Initializing NVMe Controllers 00:08:14.311 Attached to 0000:00:10.0 00:08:14.311 Attached to 0000:00:11.0 00:08:14.311 Attached to 0000:00:13.0 00:08:14.311 Attached to 0000:00:12.0 00:08:14.311 Initialization complete. Launching workers. 
00:08:14.311 submit (in ns) avg, min, max = 12717.3, 10910.0, 92905.4 00:08:14.311 complete (in ns) avg, min, max = 8515.7, 7214.6, 215750.8 00:08:14.311 00:08:14.311 Submit histogram 00:08:14.311 ================ 00:08:14.311 Range in us Cumulative Count 00:08:14.311 10.880 - 10.929: 0.0121% ( 1) 00:08:14.311 11.225 - 11.274: 0.0241% ( 1) 00:08:14.311 11.372 - 11.422: 0.0362% ( 1) 00:08:14.311 11.422 - 11.471: 0.0603% ( 2) 00:08:14.311 11.520 - 11.569: 0.0724% ( 1) 00:08:14.311 11.569 - 11.618: 0.0845% ( 1) 00:08:14.311 11.618 - 11.668: 0.1810% ( 8) 00:08:14.311 11.668 - 11.717: 0.4344% ( 21) 00:08:14.311 11.717 - 11.766: 1.5082% ( 89) 00:08:14.311 11.766 - 11.815: 4.5125% ( 249) 00:08:14.311 11.815 - 11.865: 9.4836% ( 412) 00:08:14.311 11.865 - 11.914: 17.6641% ( 678) 00:08:14.311 11.914 - 11.963: 27.3407% ( 802) 00:08:14.311 11.963 - 12.012: 38.2843% ( 907) 00:08:14.311 12.012 - 12.062: 48.1419% ( 817) 00:08:14.311 12.062 - 12.111: 56.7568% ( 714) 00:08:14.311 12.111 - 12.160: 63.6824% ( 574) 00:08:14.311 12.160 - 12.209: 69.3292% ( 468) 00:08:14.311 12.209 - 12.258: 73.9020% ( 379) 00:08:14.311 12.258 - 12.308: 77.8234% ( 325) 00:08:14.311 12.308 - 12.357: 80.4416% ( 217) 00:08:14.311 12.357 - 12.406: 82.5893% ( 178) 00:08:14.311 12.406 - 12.455: 84.3147% ( 143) 00:08:14.311 12.455 - 12.505: 85.4006% ( 90) 00:08:14.311 12.505 - 12.554: 86.3417% ( 78) 00:08:14.311 12.554 - 12.603: 86.8605% ( 43) 00:08:14.311 12.603 - 12.702: 87.8016% ( 78) 00:08:14.311 12.702 - 12.800: 88.3446% ( 45) 00:08:14.311 12.800 - 12.898: 88.9599% ( 51) 00:08:14.311 12.898 - 12.997: 89.3702% ( 34) 00:08:14.311 12.997 - 13.095: 89.6236% ( 21) 00:08:14.311 13.095 - 13.194: 89.7563% ( 11) 00:08:14.311 13.194 - 13.292: 89.8769% ( 10) 00:08:14.311 13.292 - 13.391: 90.0338% ( 13) 00:08:14.311 13.391 - 13.489: 90.0579% ( 2) 00:08:14.311 13.489 - 13.588: 90.1424% ( 7) 00:08:14.311 13.588 - 13.686: 90.1665% ( 2) 00:08:14.311 13.686 - 13.785: 90.2027% ( 3) 00:08:14.311 13.785 - 13.883: 90.2268% ( 
2) 00:08:14.311 13.883 - 13.982: 90.2751% ( 4) 00:08:14.311 13.982 - 14.080: 90.3354% ( 5) 00:08:14.311 14.080 - 14.178: 90.5043% ( 14) 00:08:14.311 14.178 - 14.277: 90.6853% ( 15) 00:08:14.311 14.277 - 14.375: 90.9146% ( 19) 00:08:14.311 14.375 - 14.474: 91.2162% ( 25) 00:08:14.311 14.474 - 14.572: 91.5058% ( 24) 00:08:14.311 14.572 - 14.671: 91.7350% ( 19) 00:08:14.311 14.671 - 14.769: 91.9281% ( 16) 00:08:14.311 14.769 - 14.868: 92.2177% ( 24) 00:08:14.311 14.868 - 14.966: 92.3504% ( 11) 00:08:14.311 14.966 - 15.065: 92.5193% ( 14) 00:08:14.311 15.065 - 15.163: 92.6279% ( 9) 00:08:14.311 15.163 - 15.262: 92.7365% ( 9) 00:08:14.311 15.262 - 15.360: 92.8209% ( 7) 00:08:14.311 15.360 - 15.458: 92.9416% ( 10) 00:08:14.311 15.458 - 15.557: 93.0985% ( 13) 00:08:14.311 15.557 - 15.655: 93.2432% ( 12) 00:08:14.311 15.655 - 15.754: 93.4725% ( 19) 00:08:14.311 15.754 - 15.852: 93.7138% ( 20) 00:08:14.311 15.852 - 15.951: 93.9672% ( 21) 00:08:14.311 15.951 - 16.049: 94.3774% ( 34) 00:08:14.311 16.049 - 16.148: 94.7032% ( 27) 00:08:14.311 16.148 - 16.246: 95.0531% ( 29) 00:08:14.311 16.246 - 16.345: 95.4392% ( 32) 00:08:14.311 16.345 - 16.443: 95.8132% ( 31) 00:08:14.311 16.443 - 16.542: 96.0183% ( 17) 00:08:14.311 16.542 - 16.640: 96.1511% ( 11) 00:08:14.311 16.640 - 16.738: 96.2838% ( 11) 00:08:14.311 16.738 - 16.837: 96.4648% ( 15) 00:08:14.311 16.837 - 16.935: 96.5372% ( 6) 00:08:14.311 16.935 - 17.034: 96.6337% ( 8) 00:08:14.311 17.034 - 17.132: 96.7061% ( 6) 00:08:14.311 17.132 - 17.231: 96.7785% ( 6) 00:08:14.311 17.231 - 17.329: 96.8267% ( 4) 00:08:14.311 17.329 - 17.428: 96.8750% ( 4) 00:08:14.311 17.526 - 17.625: 96.8871% ( 1) 00:08:14.311 17.625 - 17.723: 96.9233% ( 3) 00:08:14.311 17.723 - 17.822: 96.9353% ( 1) 00:08:14.311 17.822 - 17.920: 97.0439% ( 9) 00:08:14.311 17.920 - 18.018: 97.1646% ( 10) 00:08:14.311 18.018 - 18.117: 97.2490% ( 7) 00:08:14.311 18.117 - 18.215: 97.3697% ( 10) 00:08:14.311 18.215 - 18.314: 97.4662% ( 8) 00:08:14.311 18.314 - 18.412: 
97.5145% ( 4) 00:08:14.311 18.412 - 18.511: 97.6231% ( 9) 00:08:14.311 18.511 - 18.609: 97.7075% ( 7) 00:08:14.311 18.609 - 18.708: 97.7679% ( 5) 00:08:14.311 18.708 - 18.806: 97.8764% ( 9) 00:08:14.311 18.806 - 18.905: 97.9488% ( 6) 00:08:14.311 18.905 - 19.003: 97.9971% ( 4) 00:08:14.311 19.003 - 19.102: 98.0816% ( 7) 00:08:14.311 19.102 - 19.200: 98.1660% ( 7) 00:08:14.311 19.200 - 19.298: 98.2022% ( 3) 00:08:14.311 19.298 - 19.397: 98.2505% ( 4) 00:08:14.311 19.397 - 19.495: 98.2987% ( 4) 00:08:14.311 19.495 - 19.594: 98.3591% ( 5) 00:08:14.311 19.594 - 19.692: 98.4315% ( 6) 00:08:14.311 19.692 - 19.791: 98.4435% ( 1) 00:08:14.311 19.988 - 20.086: 98.4556% ( 1) 00:08:14.311 20.185 - 20.283: 98.4797% ( 2) 00:08:14.311 20.283 - 20.382: 98.5039% ( 2) 00:08:14.311 20.382 - 20.480: 98.5280% ( 2) 00:08:14.311 20.480 - 20.578: 98.5401% ( 1) 00:08:14.311 20.578 - 20.677: 98.5521% ( 1) 00:08:14.311 20.775 - 20.874: 98.5642% ( 1) 00:08:14.311 20.874 - 20.972: 98.5883% ( 2) 00:08:14.311 20.972 - 21.071: 98.6004% ( 1) 00:08:14.311 21.071 - 21.169: 98.6125% ( 1) 00:08:14.311 21.169 - 21.268: 98.6245% ( 1) 00:08:14.311 21.366 - 21.465: 98.6486% ( 2) 00:08:14.311 21.465 - 21.563: 98.6728% ( 2) 00:08:14.311 21.563 - 21.662: 98.6969% ( 2) 00:08:14.311 21.662 - 21.760: 98.7210% ( 2) 00:08:14.311 21.957 - 22.055: 98.7331% ( 1) 00:08:14.311 22.055 - 22.154: 98.7572% ( 2) 00:08:14.311 22.646 - 22.745: 98.7693% ( 1) 00:08:14.311 22.942 - 23.040: 98.7934% ( 2) 00:08:14.311 23.040 - 23.138: 98.8055% ( 1) 00:08:14.311 23.434 - 23.532: 98.8176% ( 1) 00:08:14.312 23.631 - 23.729: 98.8296% ( 1) 00:08:14.312 23.729 - 23.828: 98.8417% ( 1) 00:08:14.312 23.828 - 23.926: 98.8538% ( 1) 00:08:14.312 24.025 - 24.123: 98.8658% ( 1) 00:08:14.312 24.123 - 24.222: 98.8779% ( 1) 00:08:14.312 24.615 - 24.714: 98.8900% ( 1) 00:08:14.312 24.714 - 24.812: 98.9020% ( 1) 00:08:14.312 24.911 - 25.009: 98.9141% ( 1) 00:08:14.312 25.108 - 25.206: 98.9262% ( 1) 00:08:14.312 25.403 - 25.600: 98.9382% ( 1) 
00:08:14.312 25.600 - 25.797: 98.9503% ( 1) 00:08:14.312 25.797 - 25.994: 98.9624% ( 1) 00:08:14.312 25.994 - 26.191: 98.9744% ( 1) 00:08:14.312 26.191 - 26.388: 98.9865% ( 1) 00:08:14.312 26.585 - 26.782: 98.9986% ( 1) 00:08:14.312 27.766 - 27.963: 99.0106% ( 1) 00:08:14.312 29.932 - 30.129: 99.0227% ( 1) 00:08:14.312 30.326 - 30.523: 99.0709% ( 4) 00:08:14.312 30.523 - 30.720: 99.2519% ( 15) 00:08:14.312 30.720 - 30.917: 99.4088% ( 13) 00:08:14.312 30.917 - 31.114: 99.6018% ( 16) 00:08:14.312 31.114 - 31.311: 99.6984% ( 8) 00:08:14.312 31.311 - 31.508: 99.7466% ( 4) 00:08:14.312 31.508 - 31.705: 99.7828% ( 3) 00:08:14.312 31.705 - 31.902: 99.7949% ( 1) 00:08:14.312 34.658 - 34.855: 99.8069% ( 1) 00:08:14.312 36.234 - 36.431: 99.8190% ( 1) 00:08:14.312 38.006 - 38.203: 99.8431% ( 2) 00:08:14.312 38.400 - 38.597: 99.8552% ( 1) 00:08:14.312 40.369 - 40.566: 99.8673% ( 1) 00:08:14.312 41.157 - 41.354: 99.8793% ( 1) 00:08:14.312 47.655 - 47.852: 99.8914% ( 1) 00:08:14.312 49.428 - 49.625: 99.9155% ( 2) 00:08:14.312 49.625 - 49.822: 99.9276% ( 1) 00:08:14.312 50.412 - 50.806: 99.9397% ( 1) 00:08:14.312 51.200 - 51.594: 99.9517% ( 1) 00:08:14.312 61.440 - 61.834: 99.9638% ( 1) 00:08:14.312 69.711 - 70.105: 99.9759% ( 1) 00:08:14.312 83.102 - 83.495: 99.9879% ( 1) 00:08:14.312 92.554 - 92.948: 100.0000% ( 1) 00:08:14.312 00:08:14.312 Complete histogram 00:08:14.312 ================== 00:08:14.312 Range in us Cumulative Count 00:08:14.312 7.188 - 7.237: 0.0362% ( 3) 00:08:14.312 7.237 - 7.286: 0.1086% ( 6) 00:08:14.312 7.286 - 7.335: 0.3137% ( 17) 00:08:14.312 7.335 - 7.385: 0.6636% ( 29) 00:08:14.312 7.385 - 7.434: 1.3755% ( 59) 00:08:14.312 7.434 - 7.483: 2.2442% ( 72) 00:08:14.312 7.483 - 7.532: 2.9078% ( 55) 00:08:14.312 7.532 - 7.582: 3.6076% ( 58) 00:08:14.312 7.582 - 7.631: 4.2954% ( 57) 00:08:14.312 7.631 - 7.680: 4.7539% ( 38) 00:08:14.312 7.680 - 7.729: 5.1400% ( 32) 00:08:14.312 7.729 - 7.778: 5.3692% ( 19) 00:08:14.312 7.778 - 7.828: 6.3345% ( 80) 00:08:14.312 
7.828 - 7.877: 9.8214% ( 289) 00:08:14.312 7.877 - 7.926: 16.1076% ( 521) 00:08:14.312 7.926 - 7.975: 23.9503% ( 650) 00:08:14.312 7.975 - 8.025: 35.0989% ( 924) 00:08:14.312 8.025 - 8.074: 47.4542% ( 1024) 00:08:14.312 8.074 - 8.123: 58.6993% ( 932) 00:08:14.312 8.123 - 8.172: 68.2915% ( 795) 00:08:14.312 8.172 - 8.222: 75.2896% ( 580) 00:08:14.312 8.222 - 8.271: 80.7191% ( 450) 00:08:14.312 8.271 - 8.320: 84.5077% ( 314) 00:08:14.312 8.320 - 8.369: 87.1260% ( 217) 00:08:14.312 8.369 - 8.418: 88.9479% ( 151) 00:08:14.312 8.418 - 8.468: 90.2510% ( 108) 00:08:14.312 8.468 - 8.517: 91.3369% ( 90) 00:08:14.312 8.517 - 8.566: 92.0246% ( 57) 00:08:14.312 8.566 - 8.615: 92.3142% ( 24) 00:08:14.312 8.615 - 8.665: 92.5796% ( 22) 00:08:14.312 8.665 - 8.714: 92.6882% ( 9) 00:08:14.312 8.714 - 8.763: 92.8692% ( 15) 00:08:14.312 8.763 - 8.812: 93.0019% ( 11) 00:08:14.312 8.812 - 8.862: 93.0743% ( 6) 00:08:14.312 8.862 - 8.911: 93.1467% ( 6) 00:08:14.312 8.911 - 8.960: 93.2191% ( 6) 00:08:14.312 8.960 - 9.009: 93.3398% ( 10) 00:08:14.312 9.009 - 9.058: 93.4363% ( 8) 00:08:14.312 9.058 - 9.108: 93.4725% ( 3) 00:08:14.312 9.108 - 9.157: 93.4846% ( 1) 00:08:14.312 9.157 - 9.206: 93.5087% ( 2) 00:08:14.312 9.206 - 9.255: 93.5449% ( 3) 00:08:14.312 9.255 - 9.305: 93.5569% ( 1) 00:08:14.312 9.305 - 9.354: 93.5811% ( 2) 00:08:14.312 9.354 - 9.403: 93.5931% ( 1) 00:08:14.312 9.403 - 9.452: 93.6052% ( 1) 00:08:14.312 9.452 - 9.502: 93.6414% ( 3) 00:08:14.312 9.600 - 9.649: 93.6655% ( 2) 00:08:14.312 9.649 - 9.698: 93.6776% ( 1) 00:08:14.312 9.698 - 9.748: 93.7017% ( 2) 00:08:14.312 9.797 - 9.846: 93.7138% ( 1) 00:08:14.312 9.994 - 10.043: 93.7259% ( 1) 00:08:14.312 10.092 - 10.142: 93.7500% ( 2) 00:08:14.312 10.142 - 10.191: 93.7621% ( 1) 00:08:14.312 10.191 - 10.240: 93.8103% ( 4) 00:08:14.312 10.240 - 10.289: 93.8345% ( 2) 00:08:14.312 10.289 - 10.338: 93.8465% ( 1) 00:08:14.312 10.338 - 10.388: 93.8586% ( 1) 00:08:14.312 10.388 - 10.437: 93.8707% ( 1) 00:08:14.312 10.437 - 10.486: 
93.8948% ( 2) 00:08:14.312 10.486 - 10.535: 93.9189% ( 2) 00:08:14.312 10.585 - 10.634: 93.9310% ( 1) 00:08:14.312 10.634 - 10.683: 93.9431% ( 1) 00:08:14.312 10.683 - 10.732: 93.9551% ( 1) 00:08:14.312 10.732 - 10.782: 93.9913% ( 3) 00:08:14.312 10.782 - 10.831: 94.0034% ( 1) 00:08:14.312 10.831 - 10.880: 94.0396% ( 3) 00:08:14.312 10.880 - 10.929: 94.0758% ( 3) 00:08:14.312 10.929 - 10.978: 94.0999% ( 2) 00:08:14.312 10.978 - 11.028: 94.1240% ( 2) 00:08:14.312 11.028 - 11.077: 94.1482% ( 2) 00:08:14.312 11.126 - 11.175: 94.2326% ( 7) 00:08:14.312 11.175 - 11.225: 94.2809% ( 4) 00:08:14.312 11.225 - 11.274: 94.3774% ( 8) 00:08:14.312 11.274 - 11.323: 94.4619% ( 7) 00:08:14.312 11.323 - 11.372: 94.5343% ( 6) 00:08:14.312 11.372 - 11.422: 94.6670% ( 11) 00:08:14.312 11.422 - 11.471: 94.7997% ( 11) 00:08:14.312 11.471 - 11.520: 94.9566% ( 13) 00:08:14.312 11.520 - 11.569: 95.1737% ( 18) 00:08:14.312 11.569 - 11.618: 95.3185% ( 12) 00:08:14.312 11.618 - 11.668: 95.5236% ( 17) 00:08:14.312 11.668 - 11.717: 95.6684% ( 12) 00:08:14.312 11.717 - 11.766: 95.8012% ( 11) 00:08:14.312 11.766 - 11.815: 95.9701% ( 14) 00:08:14.312 11.815 - 11.865: 96.1511% ( 15) 00:08:14.312 11.865 - 11.914: 96.2838% ( 11) 00:08:14.312 11.914 - 11.963: 96.4165% ( 11) 00:08:14.312 11.963 - 12.012: 96.5492% ( 11) 00:08:14.312 12.012 - 12.062: 96.6096% ( 5) 00:08:14.312 12.062 - 12.111: 96.7061% ( 8) 00:08:14.312 12.111 - 12.160: 96.7905% ( 7) 00:08:14.312 12.160 - 12.209: 96.8509% ( 5) 00:08:14.312 12.209 - 12.258: 96.9233% ( 6) 00:08:14.312 12.258 - 12.308: 97.0077% ( 7) 00:08:14.312 12.308 - 12.357: 97.0439% ( 3) 00:08:14.312 12.357 - 12.406: 97.0922% ( 4) 00:08:14.312 12.406 - 12.455: 97.1284% ( 3) 00:08:14.312 12.455 - 12.505: 97.1887% ( 5) 00:08:14.312 12.505 - 12.554: 97.2008% ( 1) 00:08:14.312 12.554 - 12.603: 97.2370% ( 3) 00:08:14.312 12.603 - 12.702: 97.2611% ( 2) 00:08:14.312 12.702 - 12.800: 97.3214% ( 5) 00:08:14.312 12.800 - 12.898: 97.3818% ( 5) 00:08:14.312 12.898 - 12.997: 
97.4059% ( 2) 00:08:14.312 12.997 - 13.095: 97.4542% ( 4) 00:08:14.312 13.095 - 13.194: 97.4903% ( 3) 00:08:14.312 13.194 - 13.292: 97.5024% ( 1) 00:08:14.312 13.292 - 13.391: 97.5145% ( 1) 00:08:14.312 13.391 - 13.489: 97.5265% ( 1) 00:08:14.312 13.489 - 13.588: 97.5507% ( 2) 00:08:14.312 13.588 - 13.686: 97.5869% ( 3) 00:08:14.312 13.686 - 13.785: 97.6472% ( 5) 00:08:14.312 13.785 - 13.883: 97.6955% ( 4) 00:08:14.312 13.883 - 13.982: 97.7075% ( 1) 00:08:14.312 13.982 - 14.080: 97.8041% ( 8) 00:08:14.312 14.080 - 14.178: 97.9006% ( 8) 00:08:14.312 14.178 - 14.277: 97.9368% ( 3) 00:08:14.312 14.277 - 14.375: 97.9609% ( 2) 00:08:14.312 14.375 - 14.474: 98.0092% ( 4) 00:08:14.312 14.474 - 14.572: 98.0695% ( 5) 00:08:14.312 14.572 - 14.671: 98.1660% ( 8) 00:08:14.312 14.671 - 14.769: 98.2384% ( 6) 00:08:14.312 14.769 - 14.868: 98.3108% ( 6) 00:08:14.312 14.868 - 14.966: 98.3470% ( 3) 00:08:14.312 14.966 - 15.065: 98.3953% ( 4) 00:08:14.312 15.065 - 15.163: 98.4435% ( 4) 00:08:14.312 15.163 - 15.262: 98.5039% ( 5) 00:08:14.312 15.262 - 15.360: 98.5642% ( 5) 00:08:14.312 15.360 - 15.458: 98.5883% ( 2) 00:08:14.312 15.458 - 15.557: 98.6125% ( 2) 00:08:14.312 15.557 - 15.655: 98.6245% ( 1) 00:08:14.312 15.655 - 15.754: 98.6486% ( 2) 00:08:14.312 15.754 - 15.852: 98.6607% ( 1) 00:08:14.312 15.852 - 15.951: 98.6848% ( 2) 00:08:14.312 15.951 - 16.049: 98.7090% ( 2) 00:08:14.312 16.049 - 16.148: 98.7210% ( 1) 00:08:14.312 16.148 - 16.246: 98.7331% ( 1) 00:08:14.312 16.345 - 16.443: 98.7693% ( 3) 00:08:14.312 16.542 - 16.640: 98.7814% ( 1) 00:08:14.312 16.640 - 16.738: 98.8176% ( 3) 00:08:14.312 17.329 - 17.428: 98.8296% ( 1) 00:08:14.312 17.723 - 17.822: 98.8417% ( 1) 00:08:14.313 18.511 - 18.609: 98.8538% ( 1) 00:08:14.313 18.609 - 18.708: 98.8658% ( 1) 00:08:14.313 19.003 - 19.102: 98.8779% ( 1) 00:08:14.313 19.102 - 19.200: 98.8900% ( 1) 00:08:14.313 20.086 - 20.185: 98.9141% ( 2) 00:08:14.313 21.169 - 21.268: 98.9262% ( 1) 00:08:14.313 21.563 - 21.662: 98.9382% ( 1) 
00:08:14.313 21.858 - 21.957: 98.9503% ( 1) 00:08:14.313 22.154 - 22.252: 99.0106% ( 5) 00:08:14.313 22.252 - 22.351: 99.0951% ( 7) 00:08:14.313 22.351 - 22.449: 99.1916% ( 8) 00:08:14.313 22.449 - 22.548: 99.3002% ( 9) 00:08:14.313 22.548 - 22.646: 99.4208% ( 10) 00:08:14.313 22.646 - 22.745: 99.4932% ( 6) 00:08:14.313 22.745 - 22.843: 99.5536% ( 5) 00:08:14.313 22.843 - 22.942: 99.6380% ( 7) 00:08:14.313 22.942 - 23.040: 99.6863% ( 4) 00:08:14.313 23.040 - 23.138: 99.7104% ( 2) 00:08:14.313 23.138 - 23.237: 99.7225% ( 1) 00:08:14.313 23.237 - 23.335: 99.7346% ( 1) 00:08:14.313 23.335 - 23.434: 99.7587% ( 2) 00:08:14.313 23.434 - 23.532: 99.7708% ( 1) 00:08:14.313 23.532 - 23.631: 99.7949% ( 2) 00:08:14.313 23.729 - 23.828: 99.8431% ( 4) 00:08:14.313 23.828 - 23.926: 99.8552% ( 1) 00:08:14.313 23.926 - 24.025: 99.8673% ( 1) 00:08:14.313 24.222 - 24.320: 99.8793% ( 1) 00:08:14.313 24.418 - 24.517: 99.8914% ( 1) 00:08:14.313 25.797 - 25.994: 99.9035% ( 1) 00:08:14.313 27.569 - 27.766: 99.9155% ( 1) 00:08:14.313 27.963 - 28.160: 99.9276% ( 1) 00:08:14.313 31.902 - 32.098: 99.9397% ( 1) 00:08:14.313 32.295 - 32.492: 99.9517% ( 1) 00:08:14.313 34.658 - 34.855: 99.9638% ( 1) 00:08:14.313 35.249 - 35.446: 99.9759% ( 1) 00:08:14.313 135.483 - 136.271: 99.9879% ( 1) 00:08:14.313 214.252 - 215.828: 100.0000% ( 1) 00:08:14.313 00:08:14.313 00:08:14.313 real 0m1.187s 00:08:14.313 user 0m1.050s 00:08:14.313 sys 0m0.088s 00:08:14.313 08:52:08 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.313 08:52:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 ************************************ 00:08:14.313 END TEST nvme_overhead 00:08:14.313 ************************************ 00:08:14.313 08:52:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:14.313 08:52:08 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:14.313 08:52:08 nvme 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.313 08:52:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 ************************************ 00:08:14.313 START TEST nvme_arbitration 00:08:14.313 ************************************ 00:08:14.313 08:52:08 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:17.606 Initializing NVMe Controllers 00:08:17.606 Attached to 0000:00:10.0 00:08:17.606 Attached to 0000:00:11.0 00:08:17.606 Attached to 0000:00:13.0 00:08:17.606 Attached to 0000:00:12.0 00:08:17.606 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:17.606 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:17.606 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:17.606 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:17.606 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:17.606 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:17.606 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:17.606 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:17.606 Initialization complete. Launching workers. 
00:08:17.606 Starting thread on core 1 with urgent priority queue 00:08:17.606 Starting thread on core 2 with urgent priority queue 00:08:17.606 Starting thread on core 3 with urgent priority queue 00:08:17.606 Starting thread on core 0 with urgent priority queue 00:08:17.606 QEMU NVMe Ctrl (12340 ) core 0: 5824.00 IO/s 17.17 secs/100000 ios 00:08:17.606 QEMU NVMe Ctrl (12342 ) core 0: 5824.00 IO/s 17.17 secs/100000 ios 00:08:17.606 QEMU NVMe Ctrl (12341 ) core 1: 6357.33 IO/s 15.73 secs/100000 ios 00:08:17.606 QEMU NVMe Ctrl (12342 ) core 1: 6357.33 IO/s 15.73 secs/100000 ios 00:08:17.606 QEMU NVMe Ctrl (12343 ) core 2: 5461.33 IO/s 18.31 secs/100000 ios 00:08:17.606 QEMU NVMe Ctrl (12342 ) core 3: 6080.00 IO/s 16.45 secs/100000 ios 00:08:17.606 ======================================================== 00:08:17.606 00:08:17.606 00:08:17.606 real 0m3.209s 00:08:17.606 user 0m8.988s 00:08:17.606 sys 0m0.105s 00:08:17.606 ************************************ 00:08:17.606 END TEST nvme_arbitration 00:08:17.606 ************************************ 00:08:17.606 08:52:11 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.606 08:52:11 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:17.606 08:52:11 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:17.606 08:52:11 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:17.606 08:52:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.606 08:52:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.606 ************************************ 00:08:17.606 START TEST nvme_single_aen 00:08:17.606 ************************************ 00:08:17.606 08:52:11 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:17.606 Asynchronous Event Request test 00:08:17.606 Attached to 0000:00:10.0 00:08:17.606 Attached to 
0000:00:11.0 00:08:17.606 Attached to 0000:00:13.0 00:08:17.606 Attached to 0000:00:12.0 00:08:17.606 Reset controller to setup AER completions for this process 00:08:17.606 Registering asynchronous event callbacks... 00:08:17.606 Getting orig temperature thresholds of all controllers 00:08:17.606 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:17.606 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:17.606 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:17.606 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:17.606 Setting all controllers temperature threshold low to trigger AER 00:08:17.606 Waiting for all controllers temperature threshold to be set lower 00:08:17.606 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:17.606 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:17.606 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:17.606 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:17.606 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:17.606 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:17.607 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:17.607 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:17.607 Waiting for all controllers to trigger AER and reset threshold 00:08:17.607 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:17.607 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:17.607 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:17.607 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:17.607 Cleaning up... 
00:08:17.607 00:08:17.607 real 0m0.201s 00:08:17.607 user 0m0.066s 00:08:17.607 sys 0m0.088s 00:08:17.607 ************************************ 00:08:17.607 END TEST nvme_single_aen 00:08:17.607 ************************************ 00:08:17.607 08:52:11 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.607 08:52:11 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 08:52:11 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:17.607 08:52:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.607 08:52:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.607 08:52:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.607 ************************************ 00:08:17.607 START TEST nvme_doorbell_aers 00:08:17.607 ************************************ 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.607 08:52:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:17.867 [2024-11-28 08:52:11.849969] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:27.867 Executing: test_write_invalid_db 00:08:27.867 Waiting for AER completion... 00:08:27.867 Failure: test_write_invalid_db 00:08:27.867 00:08:27.867 Executing: test_invalid_db_write_overflow_sq 00:08:27.867 Waiting for AER completion... 00:08:27.867 Failure: test_invalid_db_write_overflow_sq 00:08:27.867 00:08:27.867 Executing: test_invalid_db_write_overflow_cq 00:08:27.867 Waiting for AER completion... 00:08:27.867 Failure: test_invalid_db_write_overflow_cq 00:08:27.867 00:08:27.867 08:52:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:27.867 08:52:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:27.867 [2024-11-28 08:52:21.880498] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:37.830 Executing: test_write_invalid_db 00:08:37.830 Waiting for AER completion... 00:08:37.830 Failure: test_write_invalid_db 00:08:37.830 00:08:37.830 Executing: test_invalid_db_write_overflow_sq 00:08:37.830 Waiting for AER completion... 
00:08:37.830 Failure: test_invalid_db_write_overflow_sq 00:08:37.830 00:08:37.830 Executing: test_invalid_db_write_overflow_cq 00:08:37.830 Waiting for AER completion... 00:08:37.830 Failure: test_invalid_db_write_overflow_cq 00:08:37.831 00:08:37.831 08:52:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:37.831 08:52:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:37.831 [2024-11-28 08:52:31.918945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:47.793 Executing: test_write_invalid_db 00:08:47.793 Waiting for AER completion... 00:08:47.793 Failure: test_write_invalid_db 00:08:47.793 00:08:47.793 Executing: test_invalid_db_write_overflow_sq 00:08:47.793 Waiting for AER completion... 00:08:47.793 Failure: test_invalid_db_write_overflow_sq 00:08:47.793 00:08:47.793 Executing: test_invalid_db_write_overflow_cq 00:08:47.793 Waiting for AER completion... 00:08:47.793 Failure: test_invalid_db_write_overflow_cq 00:08:47.793 00:08:47.793 08:52:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:47.793 08:52:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:48.051 [2024-11-28 08:52:41.958897] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 Executing: test_write_invalid_db 00:08:58.028 Waiting for AER completion... 00:08:58.028 Failure: test_write_invalid_db 00:08:58.028 00:08:58.028 Executing: test_invalid_db_write_overflow_sq 00:08:58.028 Waiting for AER completion... 
00:08:58.028 Failure: test_invalid_db_write_overflow_sq 00:08:58.028 00:08:58.028 Executing: test_invalid_db_write_overflow_cq 00:08:58.028 Waiting for AER completion... 00:08:58.028 Failure: test_invalid_db_write_overflow_cq 00:08:58.028 00:08:58.028 ************************************ 00:08:58.028 END TEST nvme_doorbell_aers 00:08:58.028 ************************************ 00:08:58.028 00:08:58.028 real 0m40.186s 00:08:58.028 user 0m34.087s 00:08:58.028 sys 0m5.709s 00:08:58.028 08:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.028 08:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:58.028 08:52:51 nvme -- nvme/nvme.sh@97 -- # uname 00:08:58.028 08:52:51 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:58.028 08:52:51 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:58.028 08:52:51 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:58.028 08:52:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.028 08:52:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.028 ************************************ 00:08:58.028 START TEST nvme_multi_aen 00:08:58.028 ************************************ 00:08:58.028 08:52:51 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:58.028 [2024-11-28 08:52:51.998879] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:51.998935] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:51.998945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. 
Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.000517] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.000547] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.000555] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.001698] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.001725] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.001733] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.002701] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.002725] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 00:08:58.028 [2024-11-28 08:52:52.002733] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75571) is not found. Dropping the request. 
00:08:58.028 Child process pid: 76091 00:08:58.287 [Child] Asynchronous Event Request test 00:08:58.287 [Child] Attached to 0000:00:10.0 00:08:58.287 [Child] Attached to 0000:00:11.0 00:08:58.287 [Child] Attached to 0000:00:13.0 00:08:58.287 [Child] Attached to 0000:00:12.0 00:08:58.287 [Child] Registering asynchronous event callbacks... 00:08:58.287 [Child] Getting orig temperature thresholds of all controllers 00:08:58.287 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:58.287 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 [Child] Cleaning up... 00:08:58.287 Asynchronous Event Request test 00:08:58.287 Attached to 0000:00:10.0 00:08:58.287 Attached to 0000:00:11.0 00:08:58.287 Attached to 0000:00:13.0 00:08:58.287 Attached to 0000:00:12.0 00:08:58.287 Reset controller to setup AER completions for this process 00:08:58.287 Registering asynchronous event callbacks... 
00:08:58.287 Getting orig temperature thresholds of all controllers 00:08:58.287 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.287 Setting all controllers temperature threshold low to trigger AER 00:08:58.287 Waiting for all controllers temperature threshold to be set lower 00:08:58.287 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:58.287 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:58.287 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:58.287 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.287 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:58.287 Waiting for all controllers to trigger AER and reset threshold 00:08:58.287 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.287 Cleaning up... 
00:08:58.287 ************************************ 00:08:58.287 END TEST nvme_multi_aen 00:08:58.287 ************************************ 00:08:58.287 00:08:58.287 real 0m0.372s 00:08:58.287 user 0m0.102s 00:08:58.287 sys 0m0.165s 00:08:58.287 08:52:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.287 08:52:52 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:58.287 08:52:52 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:58.287 08:52:52 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:58.287 08:52:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.287 08:52:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.287 ************************************ 00:08:58.287 START TEST nvme_startup 00:08:58.287 ************************************ 00:08:58.287 08:52:52 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:58.545 Initializing NVMe Controllers 00:08:58.545 Attached to 0000:00:10.0 00:08:58.545 Attached to 0000:00:11.0 00:08:58.545 Attached to 0000:00:13.0 00:08:58.545 Attached to 0000:00:12.0 00:08:58.545 Initialization complete. 00:08:58.545 Time used:119476.016 (us). 
00:08:58.545 ************************************ 00:08:58.545 END TEST nvme_startup 00:08:58.545 ************************************ 00:08:58.545 00:08:58.545 real 0m0.175s 00:08:58.545 user 0m0.052s 00:08:58.545 sys 0m0.084s 00:08:58.545 08:52:52 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.545 08:52:52 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:58.545 08:52:52 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:58.545 08:52:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.545 08:52:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.545 08:52:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.545 ************************************ 00:08:58.545 START TEST nvme_multi_secondary 00:08:58.545 ************************************ 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76142 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76143 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:58.545 08:52:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:01.848 Initializing NVMe Controllers 00:09:01.848 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.848 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.848 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.848 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.848 
Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:01.848 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:01.848 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:01.848 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:01.848 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:01.848 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:01.848 Initialization complete. Launching workers. 00:09:01.848 ======================================================== 00:09:01.849 Latency(us) 00:09:01.849 Device Information : IOPS MiB/s Average min max 00:09:01.849 PCIE (0000:00:10.0) NSID 1 from core 2: 1507.99 5.89 10606.85 862.74 25435.88 00:09:01.849 PCIE (0000:00:11.0) NSID 1 from core 2: 1507.99 5.89 10611.66 890.26 24684.36 00:09:01.849 PCIE (0000:00:13.0) NSID 1 from core 2: 1507.99 5.89 10610.67 861.19 29183.02 00:09:01.849 PCIE (0000:00:12.0) NSID 1 from core 2: 1507.99 5.89 10612.39 882.17 27867.77 00:09:01.849 PCIE (0000:00:12.0) NSID 2 from core 2: 1507.99 5.89 10613.80 882.81 29440.40 00:09:01.849 PCIE (0000:00:12.0) NSID 3 from core 2: 1507.99 5.89 10613.59 885.11 28723.75 00:09:01.849 ======================================================== 00:09:01.849 Total : 9047.93 35.34 10611.49 861.19 29440.40 00:09:01.849 00:09:01.849 Initializing NVMe Controllers 00:09:01.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.849 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.849 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.849 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:01.849 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:01.849 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:01.849 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:01.849 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:01.849 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 
00:09:01.849 Initialization complete. Launching workers. 00:09:01.849 ======================================================== 00:09:01.849 Latency(us) 00:09:01.849 Device Information : IOPS MiB/s Average min max 00:09:01.849 PCIE (0000:00:10.0) NSID 1 from core 1: 4266.06 16.66 3748.85 876.41 9213.80 00:09:01.849 PCIE (0000:00:11.0) NSID 1 from core 1: 4266.06 16.66 3749.94 922.08 8951.30 00:09:01.849 PCIE (0000:00:13.0) NSID 1 from core 1: 4266.06 16.66 3750.13 904.40 9216.15 00:09:01.849 PCIE (0000:00:12.0) NSID 1 from core 1: 4266.06 16.66 3750.05 917.41 10159.16 00:09:01.849 PCIE (0000:00:12.0) NSID 2 from core 1: 4266.06 16.66 3750.22 917.43 11530.48 00:09:01.849 PCIE (0000:00:12.0) NSID 3 from core 1: 4266.06 16.66 3750.09 907.74 10392.71 00:09:01.849 ======================================================== 00:09:01.849 Total : 25596.34 99.99 3749.88 876.41 11530.48 00:09:01.849 00:09:01.849 08:52:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76142 00:09:03.761 Initializing NVMe Controllers 00:09:03.761 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.761 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.761 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.761 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.761 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:03.761 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:03.761 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:03.761 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:03.761 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:03.761 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:03.761 Initialization complete. Launching workers. 
00:09:03.761 ========================================================
00:09:03.761 Latency(us)
00:09:03.761 Device Information : IOPS MiB/s Average min max
00:09:03.761 PCIE (0000:00:10.0) NSID 1 from core 0: 6553.73 25.60 2439.91 719.83 11322.47
00:09:03.761 PCIE (0000:00:11.0) NSID 1 from core 0: 6559.73 25.62 2438.59 583.28 10953.87
00:09:03.761 PCIE (0000:00:13.0) NSID 1 from core 0: 6561.33 25.63 2437.90 604.04 10696.54
00:09:03.761 PCIE (0000:00:12.0) NSID 1 from core 0: 6562.13 25.63 2437.51 637.41 10595.18
00:09:03.761 PCIE (0000:00:12.0) NSID 2 from core 0: 6562.13 25.63 2437.42 595.63 12061.57
00:09:03.761 PCIE (0000:00:12.0) NSID 3 from core 0: 6565.33 25.65 2436.13 517.01 11686.19
00:09:03.761 ========================================================
00:09:03.761 Total : 39364.36 153.77 2437.91 517.01 12061.57
00:09:03.761
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76143
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76213
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76214
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:09:04.023 08:52:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:09:07.324 Initializing NVMe Controllers
00:09:07.324 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:07.324 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:09:07.324 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:09:07.324 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:09:07.324 Initialization complete. Launching workers.
00:09:07.324 ========================================================
00:09:07.324 Latency(us)
00:09:07.324 Device Information : IOPS MiB/s Average min max
00:09:07.324 PCIE (0000:00:10.0) NSID 1 from core 1: 4730.42 18.48 3380.76 805.25 8408.64
00:09:07.324 PCIE (0000:00:11.0) NSID 1 from core 1: 4730.42 18.48 3381.79 836.69 8010.68
00:09:07.324 PCIE (0000:00:13.0) NSID 1 from core 1: 4730.42 18.48 3382.30 824.67 7907.70
00:09:07.324 PCIE (0000:00:12.0) NSID 1 from core 1: 4730.42 18.48 3382.24 830.94 8431.21
00:09:07.324 PCIE (0000:00:12.0) NSID 2 from core 1: 4730.42 18.48 3382.18 810.57 8087.73
00:09:07.324 PCIE (0000:00:12.0) NSID 3 from core 1: 4735.75 18.50 3378.67 820.43 8392.99
00:09:07.324 ========================================================
00:09:07.324 Total : 28387.83 110.89 3381.32 805.25 8431.21
00:09:07.324
00:09:07.324 Initializing NVMe Controllers
00:09:07.324 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:07.324 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:07.324 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:07.324 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:07.324 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:07.324 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:07.324 Initialization complete. Launching workers.
00:09:07.324 ========================================================
00:09:07.325 Latency(us)
00:09:07.325 Device Information : IOPS MiB/s Average min max
00:09:07.325 PCIE (0000:00:10.0) NSID 1 from core 0: 4469.26 17.46 3578.31 892.39 10038.22
00:09:07.325 PCIE (0000:00:11.0) NSID 1 from core 0: 4469.26 17.46 3579.39 915.41 9338.04
00:09:07.325 PCIE (0000:00:13.0) NSID 1 from core 0: 4469.26 17.46 3579.25 937.28 8838.22
00:09:07.325 PCIE (0000:00:12.0) NSID 1 from core 0: 4469.26 17.46 3579.11 925.42 8630.45
00:09:07.325 PCIE (0000:00:12.0) NSID 2 from core 0: 4469.26 17.46 3578.97 929.47 8477.54
00:09:07.325 PCIE (0000:00:12.0) NSID 3 from core 0: 4469.26 17.46 3578.84 929.10 10345.33
00:09:07.325 ========================================================
00:09:07.325 Total : 26815.55 104.75 3578.98 892.39 10345.33
00:09:07.325
00:09:09.232 Initializing NVMe Controllers
00:09:09.232 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:09.232 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:09.232 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:09.232 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:09.232 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:09:09.232 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:09:09.232 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:09:09.232 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:09:09.232 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:09:09.232 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:09:09.232 Initialization complete. Launching workers.
00:09:09.232 ========================================================
00:09:09.232 Latency(us)
00:09:09.232 Device Information : IOPS MiB/s Average min max
00:09:09.232 PCIE (0000:00:10.0) NSID 1 from core 2: 2455.83 9.59 6513.76 836.55 39625.09
00:09:09.232 PCIE (0000:00:11.0) NSID 1 from core 2: 2455.83 9.59 6514.84 856.41 35086.30
00:09:09.232 PCIE (0000:00:13.0) NSID 1 from core 2: 2455.83 9.59 6514.74 852.52 35525.12
00:09:09.232 PCIE (0000:00:12.0) NSID 1 from core 2: 2455.83 9.59 6514.30 840.46 35289.75
00:09:09.232 PCIE (0000:00:12.0) NSID 2 from core 2: 2455.83 9.59 6513.87 839.34 35906.27
00:09:09.232 PCIE (0000:00:12.0) NSID 3 from core 2: 2459.03 9.61 6505.94 713.13 36346.89
00:09:09.232 ========================================================
00:09:09.232 Total : 14738.20 57.57 6512.91 713.13 39625.09
00:09:09.232
00:09:09.232 ************************************
00:09:09.232 END TEST nvme_multi_secondary
00:09:09.232 ************************************
00:09:09.232 08:53:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76213
00:09:09.232 08:53:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76214
00:09:09.232
00:09:09.232 real 0m10.670s
00:09:09.232 user 0m18.153s
00:09:09.232 sys 0m0.669s
00:09:09.232 08:53:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:09.232 08:53:03 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:09:09.232 08:53:03 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:09:09.232 08:53:03 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:09:09.232 08:53:03 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75179 ]]
00:09:09.232 08:53:03 nvme -- common/autotest_common.sh@1090 -- # kill 75179
00:09:09.232 08:53:03 nvme -- common/autotest_common.sh@1091 -- # wait 75179
00:09:09.232 [2024-11-28 08:53:03.179837] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.179933] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.179959] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.179984] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.180695] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.180758] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.180779] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.180841] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.232 [2024-11-28 08:53:03.181441] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.181496] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.181515] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.181539] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.182188] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.182247] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.182266] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 [2024-11-28 08:53:03.182288] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76090) is not found. Dropping the request.
00:09:09.233 08:53:03 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0
00:09:09.233 08:53:03 nvme -- common/autotest_common.sh@1097 -- # echo 2
00:09:09.233 08:53:03 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:09:09.233 08:53:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:09.233 08:53:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:09.233 08:53:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.233 ************************************
00:09:09.233 START TEST bdev_nvme_reset_stuck_adm_cmd
00:09:09.233 ************************************
00:09:09.233 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:09:09.233 * Looking for test storage...
00:09:09.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:09.233 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:09.233 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version
00:09:09.233 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:09:09.491 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:09:09.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.492 --rc genhtml_branch_coverage=1
00:09:09.492 --rc genhtml_function_coverage=1
00:09:09.492 --rc genhtml_legend=1
00:09:09.492 --rc geninfo_all_blocks=1
00:09:09.492 --rc geninfo_unexecuted_blocks=1
00:09:09.492
00:09:09.492 '
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:09:09.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.492 --rc genhtml_branch_coverage=1
00:09:09.492 --rc genhtml_function_coverage=1
00:09:09.492 --rc genhtml_legend=1
00:09:09.492 --rc geninfo_all_blocks=1
00:09:09.492 --rc geninfo_unexecuted_blocks=1
00:09:09.492
00:09:09.492 '
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:09:09.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.492 --rc genhtml_branch_coverage=1
00:09:09.492 --rc genhtml_function_coverage=1
00:09:09.492 --rc genhtml_legend=1
00:09:09.492 --rc geninfo_all_blocks=1
00:09:09.492 --rc geninfo_unexecuted_blocks=1
00:09:09.492
00:09:09.492 '
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:09:09.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.492 --rc genhtml_branch_coverage=1
00:09:09.492 --rc genhtml_function_coverage=1
00:09:09.492 --rc genhtml_legend=1
00:09:09.492 --rc geninfo_all_blocks=1
00:09:09.492 --rc geninfo_unexecuted_blocks=1
00:09:09.492
00:09:09.492 '
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=()
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=()
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76379
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76379
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76379 ']'
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:09.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:09.492 08:53:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:09.492 [2024-11-28 08:53:03.541909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:09:09.492 [2024-11-28 08:53:03.542030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ]
00:09:09.750 [2024-11-28 08:53:03.702032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:09.750 [2024-11-28 08:53:03.753998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:09:09.750 [2024-11-28 08:53:03.754538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:09:09.750 [2024-11-28 08:53:03.754695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.750 [2024-11-28 08:53:03.754782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:09:10.316 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:10.316 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0
00:09:10.316 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:09:10.316 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.316 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:10.574 nvme0n1
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_uEMTT.txt
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:10.574 true
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732783984
00:09:10.574 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76402
00:09:10.575 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:10.575 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:09:10.575 08:53:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:12.475 [2024-11-28 08:53:06.479738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:09:12.475 [2024-11-28 08:53:06.480082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:09:12.475 [2024-11-28 08:53:06.480311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:09:12.475 [2024-11-28 08:53:06.480410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:09:12.475 [2024-11-28 08:53:06.482484] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.475 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76402
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76402
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76402
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_uEMTT.txt
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_uEMTT.txt
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76379
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76379 ']'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76379
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76379
00:09:12.475 killing process with pid 76379 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76379'
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76379
00:09:12.475 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76379
00:09:13.044 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:09:13.044 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:09:13.044
00:09:13.044 real 0m3.648s
00:09:13.044 user 0m12.851s
00:09:13.044 sys 0m0.512s
00:09:13.044 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.044 08:53:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:13.044 ************************************
00:09:13.044 END TEST bdev_nvme_reset_stuck_adm_cmd
00:09:13.044 ************************************
00:09:13.044 08:53:06 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:09:13.044 08:53:06 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:09:13.044 08:53:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:13.044 08:53:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.044 08:53:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:13.044 ************************************
00:09:13.044 START TEST nvme_fio
00:09:13.044 ************************************
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test
00:09:13.044 08:53:06 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:09:13.044 08:53:06 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:09:13.044 08:53:06 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=()
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:09:13.044 08:53:06 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:13.044 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:09:13.044 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:13.044 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:09:13.044 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:09:13.044 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:09:13.044 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:09:13.044 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:09:13.302 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:09:13.302 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:09:13.561 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:09:13.561 08:53:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib=
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:09:13.561 08:53:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:13.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:09:13.561 fio-3.35
00:09:13.561 Starting 1 thread
00:09:17.746
00:09:17.746 test: (groupid=0, jobs=1): err= 0: pid=76525: Thu Nov 28 08:53:11 2024
00:09:17.746 read: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(125MiB/2020msec)
00:09:17.746 slat (nsec): min=3323, max=82400, avg=5384.89, stdev=2980.45
00:09:17.746 clat (usec): min=797, max=23243, avg=3208.19, stdev=1246.20
00:09:17.746 lat (usec): min=800, max=23248, avg=3213.57, stdev=1247.07
00:09:17.746 clat percentiles (usec):
00:09:17.746 | 1.00th=[ 1532], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2540],
00:09:17.746 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2868], 00:09:17.746 | 70.00th=[ 3130], 80.00th=[ 3752], 90.00th=[ 4883], 95.00th=[ 5800], 00:09:17.746 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[19792], 99.95th=[20841], 00:09:17.746 | 99.99th=[22676] 00:09:17.746 bw ( KiB/s): min=34104, max=84544, per=100.00%, avg=64014.00, stdev=23214.42, samples=4 00:09:17.746 iops : min= 8526, max=21136, avg=16003.50, stdev=5803.61, samples=4 00:09:17.746 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(125MiB/2020msec); 0 zone resets 00:09:17.746 slat (nsec): min=3451, max=98941, avg=5780.30, stdev=2967.15 00:09:17.746 clat (usec): min=889, max=48882, avg=4826.30, stdev=5899.45 00:09:17.746 lat (usec): min=892, max=48887, avg=4832.08, stdev=5899.76 00:09:17.746 clat percentiles (usec): 00:09:17.746 | 1.00th=[ 1762], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:17.746 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:17.746 | 70.00th=[ 3326], 80.00th=[ 4293], 90.00th=[ 6325], 95.00th=[22414], 00:09:17.746 | 99.00th=[28967], 99.50th=[30278], 99.90th=[34866], 99.95th=[42730], 00:09:17.746 | 99.99th=[48497] 00:09:17.746 bw ( KiB/s): min=34568, max=84664, per=100.00%, avg=64024.00, stdev=23076.53, samples=4 00:09:17.746 iops : min= 8642, max=21166, avg=16006.00, stdev=5769.13, samples=4 00:09:17.746 lat (usec) : 1000=0.02% 00:09:17.746 lat (msec) : 2=2.32%, 4=77.64%, 10=16.01%, 20=0.72%, 50=3.30% 00:09:17.746 cpu : usr=99.11%, sys=0.05%, ctx=1, majf=0, minf=627 00:09:17.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.746 issued rwts: total=32043,32103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.746 00:09:17.746 Run status group 0 (all jobs): 
00:09:17.746 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=125MiB (131MB), run=2020-2020msec 00:09:17.746 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=125MiB (131MB), run=2020-2020msec 00:09:17.746 ----------------------------------------------------- 00:09:17.746 Suppressions used: 00:09:17.746 count bytes template 00:09:17.746 1 32 /usr/src/fio/parse.c 00:09:17.746 1 8 libtcmalloc_minimal.so 00:09:17.746 ----------------------------------------------------- 00:09:17.746 00:09:17.746 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.746 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:17.746 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.746 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:18.005 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:18.005 08:53:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:18.319 08:53:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:18.319 08:53:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 
00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:18.319 08:53:12 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.319 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:18.319 fio-3.35 00:09:18.319 Starting 1 thread 00:09:24.890 00:09:24.890 test: (groupid=0, jobs=1): err= 0: pid=76580: Thu Nov 28 08:53:18 2024 00:09:24.890 read: IOPS=20.8k, BW=81.1MiB/s (85.0MB/s)(162MiB/2001msec) 00:09:24.890 slat (nsec): min=4212, max=74830, avg=5068.49, stdev=2271.76 00:09:24.890 clat (usec): min=305, max=8905, avg=3078.16, stdev=840.36 00:09:24.890 lat (usec): min=310, max=8921, avg=3083.23, stdev=841.60 00:09:24.890 clat percentiles (usec): 00:09:24.890 
| 1.00th=[ 2343], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:09:24.890 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2835], 00:09:24.890 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 4178], 95.00th=[ 5080], 00:09:24.890 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7767], 99.95th=[ 8160], 00:09:24.890 | 99.99th=[ 8455] 00:09:24.890 bw ( KiB/s): min=79208, max=84992, per=98.50%, avg=81773.33, stdev=2946.83, samples=3 00:09:24.890 iops : min=19802, max=21248, avg=20443.33, stdev=736.71, samples=3 00:09:24.890 write: IOPS=20.7k, BW=80.8MiB/s (84.7MB/s)(162MiB/2001msec); 0 zone resets 00:09:24.890 slat (nsec): min=4313, max=68081, avg=5496.91, stdev=2282.34 00:09:24.890 clat (usec): min=208, max=8792, avg=3080.24, stdev=841.36 00:09:24.890 lat (usec): min=213, max=8806, avg=3085.73, stdev=842.59 00:09:24.890 clat percentiles (usec): 00:09:24.890 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:09:24.890 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2835], 00:09:24.890 | 70.00th=[ 2933], 80.00th=[ 3195], 90.00th=[ 4178], 95.00th=[ 5080], 00:09:24.890 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7439], 99.95th=[ 8094], 00:09:24.890 | 99.99th=[ 8455] 00:09:24.890 bw ( KiB/s): min=79504, max=85288, per=99.07%, avg=81920.00, stdev=3007.22, samples=3 00:09:24.890 iops : min=19876, max=21322, avg=20480.00, stdev=751.81, samples=3 00:09:24.890 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:24.890 lat (msec) : 2=0.14%, 4=88.53%, 10=11.29% 00:09:24.890 cpu : usr=99.10%, sys=0.10%, ctx=6, majf=0, minf=627 00:09:24.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.890 issued rwts: total=41529,41365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.890 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:09:24.890 00:09:24.890 Run status group 0 (all jobs): 00:09:24.890 READ: bw=81.1MiB/s (85.0MB/s), 81.1MiB/s-81.1MiB/s (85.0MB/s-85.0MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:24.890 WRITE: bw=80.8MiB/s (84.7MB/s), 80.8MiB/s-80.8MiB/s (84.7MB/s-84.7MB/s), io=162MiB (169MB), run=2001-2001msec 00:09:24.890 ----------------------------------------------------- 00:09:24.890 Suppressions used: 00:09:24.890 count bytes template 00:09:24.890 1 32 /usr/src/fio/parse.c 00:09:24.890 1 8 libtcmalloc_minimal.so 00:09:24.890 ----------------------------------------------------- 00:09:24.890 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:24.890 08:53:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:24.890 08:53:18 
nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:24.890 08:53:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:24.890 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:24.890 fio-3.35 00:09:24.890 Starting 1 thread 00:09:31.447 00:09:31.447 test: (groupid=0, jobs=1): err= 0: pid=76636: Thu Nov 28 08:53:24 2024 00:09:31.447 read: IOPS=20.7k, BW=81.0MiB/s (84.9MB/s)(162MiB/2001msec) 00:09:31.447 slat (nsec): min=3268, max=73803, avg=5121.94, stdev=2576.86 00:09:31.447 clat (usec): min=568, max=10322, avg=3077.55, stdev=920.28 00:09:31.447 lat (usec): min=580, max=10360, 
avg=3082.67, stdev=921.77 00:09:31.447 clat percentiles (usec): 00:09:31.447 | 1.00th=[ 2057], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:31.447 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2802], 00:09:31.447 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4293], 95.00th=[ 5276], 00:09:31.447 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7898], 00:09:31.447 | 99.99th=[10028] 00:09:31.447 bw ( KiB/s): min=71240, max=90224, per=98.96%, avg=82066.67, stdev=9769.45, samples=3 00:09:31.447 iops : min=17810, max=22556, avg=20516.67, stdev=2442.36, samples=3 00:09:31.447 write: IOPS=20.7k, BW=80.7MiB/s (84.6MB/s)(161MiB/2001msec); 0 zone resets 00:09:31.447 slat (nsec): min=3383, max=55963, avg=5563.93, stdev=2504.68 00:09:31.447 clat (usec): min=741, max=10135, avg=3090.20, stdev=931.38 00:09:31.447 lat (usec): min=753, max=10145, avg=3095.77, stdev=932.85 00:09:31.447 clat percentiles (usec): 00:09:31.447 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:09:31.447 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2802], 00:09:31.447 | 70.00th=[ 2900], 80.00th=[ 3261], 90.00th=[ 4359], 95.00th=[ 5342], 00:09:31.447 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 8094], 00:09:31.447 | 99.99th=[ 9634] 00:09:31.447 bw ( KiB/s): min=71280, max=90272, per=99.42%, avg=82138.67, stdev=9784.92, samples=3 00:09:31.447 iops : min=17820, max=22568, avg=20534.67, stdev=2446.23, samples=3 00:09:31.447 lat (usec) : 750=0.01%, 1000=0.03% 00:09:31.447 lat (msec) : 2=0.79%, 4=87.13%, 10=12.04%, 20=0.01% 00:09:31.447 cpu : usr=99.20%, sys=0.00%, ctx=5, majf=0, minf=627 00:09:31.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:31.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.447 issued rwts: total=41483,41331,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:31.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.447 00:09:31.447 Run status group 0 (all jobs): 00:09:31.447 READ: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:31.447 WRITE: bw=80.7MiB/s (84.6MB/s), 80.7MiB/s-80.7MiB/s (84.6MB/s-84.6MB/s), io=161MiB (169MB), run=2001-2001msec 00:09:31.447 ----------------------------------------------------- 00:09:31.447 Suppressions used: 00:09:31.447 count bytes template 00:09:31.447 1 32 /usr/src/fio/parse.c 00:09:31.447 1 8 libtcmalloc_minimal.so 00:09:31.447 ----------------------------------------------------- 00:09:31.447 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:31.447 08:53:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:31.447 08:53:25 nvme.nvme_fio -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:31.447 08:53:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:31.705 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:31.705 fio-3.35 00:09:31.705 Starting 1 thread 00:09:36.970 00:09:36.970 test: (groupid=0, jobs=1): err= 0: pid=76691: Thu Nov 28 08:53:30 2024 00:09:36.970 read: IOPS=19.5k, BW=76.4MiB/s (80.1MB/s)(153MiB/2001msec) 00:09:36.970 slat (usec): min=4, max=543, avg= 5.35, stdev= 3.89 00:09:36.970 clat (usec): 
min=891, max=11009, avg=3270.27, stdev=1076.70 00:09:36.970 lat (usec): min=895, max=11014, avg=3275.62, stdev=1078.01 00:09:36.970 clat percentiles (usec): 00:09:36.970 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:36.970 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:36.970 | 70.00th=[ 3261], 80.00th=[ 3916], 90.00th=[ 4817], 95.00th=[ 5735], 00:09:36.970 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 9372], 99.95th=[10028], 00:09:36.970 | 99.99th=[10290] 00:09:36.970 bw ( KiB/s): min=71848, max=80344, per=98.52%, avg=77032.00, stdev=4546.84, samples=3 00:09:36.970 iops : min=17962, max=20086, avg=19258.00, stdev=1136.71, samples=3 00:09:36.970 write: IOPS=19.5k, BW=76.2MiB/s (79.9MB/s)(153MiB/2001msec); 0 zone resets 00:09:36.970 slat (nsec): min=4287, max=86748, avg=5779.82, stdev=2777.66 00:09:36.970 clat (usec): min=915, max=10395, avg=3261.77, stdev=1076.78 00:09:36.970 lat (usec): min=919, max=10411, avg=3267.55, stdev=1078.09 00:09:36.970 clat percentiles (usec): 00:09:36.970 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:36.970 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:36.970 | 70.00th=[ 3261], 80.00th=[ 3884], 90.00th=[ 4817], 95.00th=[ 5735], 00:09:36.970 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10159], 00:09:36.970 | 99.99th=[10290] 00:09:36.970 bw ( KiB/s): min=71784, max=80544, per=98.75%, avg=77096.00, stdev=4668.01, samples=3 00:09:36.970 iops : min=17946, max=20136, avg=19274.00, stdev=1167.00, samples=3 00:09:36.970 lat (usec) : 1000=0.02% 00:09:36.970 lat (msec) : 2=0.45%, 4=80.74%, 10=18.73%, 20=0.07% 00:09:36.970 cpu : usr=98.85%, sys=0.20%, ctx=10, majf=0, minf=624 00:09:36.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:36.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:09:36.970 issued rwts: total=39116,39056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.970 00:09:36.970 Run status group 0 (all jobs): 00:09:36.970 READ: bw=76.4MiB/s (80.1MB/s), 76.4MiB/s-76.4MiB/s (80.1MB/s-80.1MB/s), io=153MiB (160MB), run=2001-2001msec 00:09:36.970 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=153MiB (160MB), run=2001-2001msec 00:09:36.970 ----------------------------------------------------- 00:09:36.970 Suppressions used: 00:09:36.970 count bytes template 00:09:36.970 1 32 /usr/src/fio/parse.c 00:09:36.970 1 8 libtcmalloc_minimal.so 00:09:36.970 ----------------------------------------------------- 00:09:36.970 00:09:36.970 08:53:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:36.970 08:53:30 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:36.970 00:09:36.970 real 0m23.832s 00:09:36.970 user 0m18.692s 00:09:36.970 sys 0m6.572s 00:09:36.970 ************************************ 00:09:36.970 END TEST nvme_fio 00:09:36.970 ************************************ 00:09:36.970 08:53:30 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.970 08:53:30 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:36.970 00:09:36.970 real 1m31.193s 00:09:36.970 user 3m33.355s 00:09:36.970 sys 0m16.835s 00:09:36.970 ************************************ 00:09:36.970 08:53:30 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.970 08:53:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:36.970 END TEST nvme 00:09:36.970 ************************************ 00:09:36.970 08:53:30 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:36.970 08:53:30 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:36.970 08:53:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.970 08:53:30 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:09:36.970 08:53:30 -- common/autotest_common.sh@10 -- # set +x 00:09:36.970 ************************************ 00:09:36.970 START TEST nvme_scc 00:09:36.970 ************************************ 00:09:36.970 08:53:30 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:36.970 * Looking for test storage... 00:09:36.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.970 08:53:30 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:36.970 08:53:30 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:36.970 08:53:30 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.970 --rc genhtml_branch_coverage=1 00:09:36.970 --rc genhtml_function_coverage=1 00:09:36.970 --rc genhtml_legend=1 00:09:36.970 --rc geninfo_all_blocks=1 00:09:36.970 --rc geninfo_unexecuted_blocks=1 00:09:36.970 00:09:36.970 ' 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.970 --rc genhtml_branch_coverage=1 00:09:36.970 --rc genhtml_function_coverage=1 00:09:36.970 --rc genhtml_legend=1 00:09:36.970 --rc geninfo_all_blocks=1 00:09:36.970 --rc geninfo_unexecuted_blocks=1 00:09:36.970 00:09:36.970 ' 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.970 --rc genhtml_branch_coverage=1 00:09:36.970 --rc genhtml_function_coverage=1 00:09:36.970 --rc genhtml_legend=1 00:09:36.970 --rc geninfo_all_blocks=1 00:09:36.970 --rc geninfo_unexecuted_blocks=1 00:09:36.970 00:09:36.970 ' 00:09:36.970 08:53:31 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:36.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.970 --rc genhtml_branch_coverage=1 00:09:36.970 --rc genhtml_function_coverage=1 00:09:36.970 --rc genhtml_legend=1 00:09:36.970 --rc geninfo_all_blocks=1 00:09:36.970 --rc geninfo_unexecuted_blocks=1 00:09:36.970 00:09:36.970 ' 00:09:36.970 08:53:31 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.970 08:53:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.970 08:53:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:36.970 08:53:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.970 08:53:31 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.970 08:53:31 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.970 08:53:31 nvme_scc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.970 08:53:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.971 08:53:31 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.971 08:53:31 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:36.971 08:53:31 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:36.971 08:53:31 
nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:36.971 08:53:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:36.971 08:53:31 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.971 08:53:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:36.971 08:53:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:36.971 08:53:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:36.971 08:53:31 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.537 Waiting for block devices as requested 00:09:37.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.537 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.800 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.800 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:43.078 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:43.078 08:53:36 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 
00:09:43.078 08:53:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:43.078 08:53:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:43.078 08:53:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.078 08:53:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 
00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[cmic]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:43.078 08:53:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.078 08:53:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:43.078 
08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.078 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:43.079 
08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[sqes]="0x66"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:43.079 08:53:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 
08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[maxdna]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:43.079 08:53:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 
rwl:0 idle_power:- active_power:-"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:43.079 08:53:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[nsze]="0x140000"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:43.080 08:53:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[npwg]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:43.080 08:53:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:43.080 08:53:36 nvme_scc -- 
scripts/common.sh@18 -- # local i 00:09:43.080 08:53:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:43.080 08:53:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.080 08:53:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.080 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:43.081 08:53:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:43.081 08:53:36 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:43.081 08:53:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- 
# nvme1[kas]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:43.081 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:43.082 08:53:36 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:43.082 08:53:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:43.082 
08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.082 
08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:43.082 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:43.082 
08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[npwg]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.082 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:43.082 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:43.083 08:53:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:43.083 08:53:37 nvme_scc -- scripts/common.sh@21 -- # 
[[ =~ 0000:00:12.0 ]] 00:09:43.083 08:53:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.083 08:53:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:43.083 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:43.083 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 
08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 
08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 
00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:43.083 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 
00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:43.084 
08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:43.084 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 
08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[nwpc]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:43.084 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- 
active_power:-' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.084 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:43.085 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:43.085 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.085 08:53:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 
00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:43.085 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:43.085 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n2[nvmcap]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 
(in use)' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:43.086 
08:53:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:43.086 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:43.086 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[noiob]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:43.087 08:53:37 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 
00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:43.087 08:53:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:43.087 08:53:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:43.087 08:53:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.087 08:53:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[vid]="0x1b36"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:43.087 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:43.087 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.087 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:43.088 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.088 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:43.347 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.347 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:43.348 08:53:37 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # 
nvme3[ioccsz]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:43.348 08:53:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # 
get_ctrl_with_feature scc 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:43.348 08:53:37 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:43.349 08:53:37 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:43.349 08:53:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:43.349 08:53:37 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:43.349 08:53:37 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:43.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:44.174 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:44.174 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:44.174 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:44.174 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:44.460 08:53:38 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:44.460 08:53:38 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:44.460 08:53:38 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:44.460 08:53:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:44.460 ************************************
00:09:44.460 START TEST nvme_simple_copy
00:09:44.460 ************************************
00:09:44.460 08:53:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:44.719 Initializing NVMe Controllers
00:09:44.719 Attaching to 0000:00:10.0
00:09:44.719 Controller supports SCC. Attached to 0000:00:10.0
00:09:44.719 Namespace ID: 1 size: 6GB
00:09:44.719 Initialization complete.
00:09:44.719
00:09:44.719 Controller QEMU NVMe Ctrl (12340 )
00:09:44.719 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:44.719 Namespace Block Size:4096
00:09:44.719 Writing LBAs 0 to 63 with Random Data
00:09:44.719 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:44.719 LBAs matching Written Data: 64
00:09:44.719
00:09:44.719 real 0m0.249s
00:09:44.719 user 0m0.087s
00:09:44.719 sys 0m0.061s
00:09:44.719 08:53:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:44.719 ************************************
00:09:44.719 END TEST nvme_simple_copy
00:09:44.719 ************************************
00:09:44.719 08:53:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:44.719 ************************************
00:09:44.719 END TEST nvme_scc
00:09:44.719 ************************************
00:09:44.719
00:09:44.719 real 0m7.725s
00:09:44.719 user 0m1.032s
00:09:44.719 sys 0m1.357s
00:09:44.719 08:53:38 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:44.719 08:53:38
nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:44.719 08:53:38 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:44.719 08:53:38 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:44.719 08:53:38 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:44.719 08:53:38 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:44.719 08:53:38 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:44.719 08:53:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:44.719 08:53:38 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:44.719 08:53:38 -- common/autotest_common.sh@10 -- # set +x
00:09:44.719 ************************************
00:09:44.719 START TEST nvme_fdp
00:09:44.719 ************************************
00:09:44.719 08:53:38 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:44.719 * Looking for test storage...
00:09:44.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:44.719 08:53:38 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:44.719 08:53:38 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:09:44.719 08:53:38 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:44.719 08:53:38 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:44.719 08:53:38 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:44.720 08:53:38 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:44.720 08:53:38 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:44.720 08:53:38 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:44.720 08:53:38 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:44.720 08:53:38 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:09:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:44.720 --rc genhtml_branch_coverage=1
00:09:44.720 --rc genhtml_function_coverage=1
00:09:44.720 --rc genhtml_legend=1
00:09:44.720 --rc geninfo_all_blocks=1
00:09:44.720 --rc geninfo_unexecuted_blocks=1
00:09:44.720
00:09:44.720 '
00:09:44.720 08:53:38 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:09:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:44.720 --rc genhtml_branch_coverage=1
00:09:44.720 --rc genhtml_function_coverage=1
00:09:44.720 --rc genhtml_legend=1
00:09:44.720 --rc geninfo_all_blocks=1
00:09:44.720 --rc geninfo_unexecuted_blocks=1
00:09:44.720
00:09:44.720 '
00:09:44.720 08:53:38 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:09:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:44.720 --rc genhtml_branch_coverage=1
00:09:44.720 --rc genhtml_function_coverage=1
00:09:44.720 --rc genhtml_legend=1
00:09:44.720 --rc geninfo_all_blocks=1
00:09:44.720 --rc geninfo_unexecuted_blocks=1
00:09:44.720
00:09:44.720 '
00:09:44.720 08:53:38 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:09:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:44.720 --rc genhtml_branch_coverage=1
00:09:44.720 --rc genhtml_function_coverage=1
00:09:44.720 --rc genhtml_legend=1
00:09:44.720 --rc geninfo_all_blocks=1
00:09:44.720 --rc geninfo_unexecuted_blocks=1
00:09:44.720
00:09:44.720 '
00:09:44.720 08:53:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:44.978 08:53:38 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:44.978 08:53:38 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:44.978 08:53:38 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:44.978 08:53:38 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:44.978 08:53:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:44.978 08:53:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:44.978 08:53:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:44.978 08:53:38 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:44.978 08:53:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
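The `lt 1.15 2` trace earlier in this log walks `cmp_versions` through a component-by-component numeric comparison of the two version strings. A compact sketch of that logic under stated assumptions (the function name `version_lt` is hypothetical, it splits only on `.` where the real scripts/common.sh helper also splits on `-` and `:`, and it implements only the `<` operator):

```shell
# Hypothetical, simplified version of the cmp_versions '<' path traced
# above: split both versions on '.', compare numerically per component,
# treating missing components as 0.
version_lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)   # IFS-based word splitting into arrays
  local i x y
  for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
    x=${ver1[i]:-0} y=${ver2[i]:-0}
    (( x < y )) && return 0      # strictly less at this component
    (( x > y )) && return 1      # strictly greater: not less-than
  done
  return 1                       # equal versions: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is numeric, not lexical: `1.15` sorts above `1.2` here, which is the behavior the lcov version gate relies on.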
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:09:44.978 08:53:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:09:44.979 08:53:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:09:44.979 08:53:38 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:44.979 08:53:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:45.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:45.237 Waiting for block devices as requested
00:09:45.494 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:45.494 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:45.494 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:45.494 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:50.767 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:50.767 08:53:44 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:50.767 08:53:44 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:50.767 08:53:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:50.767 08:53:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:50.767 08:53:44 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:09:50.767 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- #
nvme0[lpa]=0x7
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.768 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- #
nvme0[sqes]=0x66 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.769 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0[maxdna]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:50.770 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.770 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0n1[nsze]=0x140000 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.770 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:50.771 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0n1[npwg]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.771 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:50.772 08:53:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:50.772 08:53:44 nvme_fdp -- scripts/common.sh@21 -- # 
[[ =~ 0000:00:10.0 ]] 00:09:50.772 08:53:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.772 08:53:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:50.772 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:50.772 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.772 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 
08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 
08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 
00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 
00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:50.773 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:50.774 
08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:50.774 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 
08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.774 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[nwpc]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:50.775 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- 
active_power:-' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:50.775 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
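The trace above repeats one pattern: `nvme_get` runs `nvme id-ctrl`/`id-ns`, then a `while IFS=: read -r reg val` loop (functions.sh@21-23) `eval`s each non-empty field into a global associative array such as `nvme1n1`. A minimal, self-contained sketch of that pattern (not the exact SPDK source; the helper name, whitespace trimming, and the stand-in input lines are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Hedged sketch of the parsing loop visible in this trace: split each
# "field    : value" line on the colon, trim the padding, and store the
# pair in a globally-scoped associative array (cf. functions.sh@20-23).
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"              # e.g. nvme1n1=() -- global assoc array
    while IFS=: read -r reg val; do
        reg=${reg%% *}               # drop the padding after the field name
        val=${val# }                 # drop the space following the colon
        # functions.sh@22-23: skip empty values, then eval the assignment
        [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"\$val\""
    done < <(printf '%s\n' "$@")     # stand-in for the real nvme-cli call
}

# Feed two lines shaped like real id-ctrl output (values from this log):
nvme_get_sketch nvme2 'vid       : 0x1b36' 'ssvid     : 0x1af4'
echo "${nvme2[vid]} ${nvme2[ssvid]}"   # -> 0x1b36 0x1af4
```

The `eval` is what lets one helper populate differently named arrays (`nvme1`, `nvme1n1`, `nvme2`, ...) per controller and namespace, which is why every stored register appears twice in the trace: once as the quoted `eval` string and once as the resulting assignment.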
00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:50.775 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:50.776 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n ms:8 lbads:9 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.776 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.776 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:50.777 08:53:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:50.777 08:53:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:50.777 08:53:44 nvme_fdp -- 
scripts/common.sh@25 -- # [[ -z '' ]] 00:09:50.777 08:53:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:50.777 
08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 
00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.777 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[mec]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:50.778 
08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- 
# nvme2[unvmcap]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.778 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[anagrpmax]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 
-- # nvme2[nwpc]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 
08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.779 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:50.780 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 
08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[nabsn]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.780 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 
00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:50.781 
08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 
00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:50.781 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.043 
08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.043 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nacwu]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:51.044 
08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n ms:0 lbads:9 rp:0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.044 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.045 08:53:44 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:51.045 08:53:44 nvme_fdp -- 
nvme/functions.sh@18 -- # shift 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nsfeat]=0x14 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:51.045 08:53:44 
nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.045 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:51.046 08:53:44 nvme_fdp -- 
nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:51.046 08:53:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:51.046 08:53:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:51.046 08:53:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.046 08:53:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 
00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:51.046 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[rab]=6 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 1 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 
08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[aerl]=3 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.047 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 
00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[domainid]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.048 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 
08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[sgls]=0x1 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:51.049 08:53:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 
08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:51.049 08:53:44 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:51.049 
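The trace above shows `nvme/functions.sh` repeatedly doing `IFS=:` plus `read -r reg val` to fold controller-identify output into a per-controller associative array (`nvme3[sqes]=0x66`, `nvme3[nn]=256`, and so on). A minimal standalone sketch of that parsing pattern, using hypothetical sample lines in place of real `nvme id-ctrl` output:

```shell
#!/usr/bin/env bash
# Sketch: parse "register : value" lines into a bash associative array,
# mirroring the IFS=: / read -r reg val loop seen in the trace above.
declare -A nvme3

parse_id_ctrl() {
    local reg val
    while IFS=: read -r reg val; do
        # strip the column-layout whitespace around the register name
        reg="${reg//[[:space:]]/}"
        # strip leading whitespace from the value; note `read` gives the
        # last variable the rest of the line, so colons in values survive
        val="${val#"${val%%[![:space:]]*}"}"
        [[ -n $val ]] && nvme3[$reg]=$val
    done
}

# hypothetical sample lines standing in for real id-ctrl output
parse_id_ctrl <<'EOF'
sqes      : 0x66
cqes      : 0x44
nn        : 256
subnqn    : nqn.2019-08.org.qemu:fdp-subsys3
EOF

echo "${nvme3[sqes]} ${nvme3[nn]}"
```

Because the heredoc redirects into the function call in the current shell, the array persists after parsing, which is what lets the later `ctrl_has_fdp` checks read registers back out of it.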
08:53:44 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:51.049 08:53:44 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:51.050 
08:53:44 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:51.050 08:53:44 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@199 -- # 
ctrl_has_fdp nvme2 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:51.050 08:53:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:51.050 08:53:45 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:51.050 08:53:45 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.871 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:51.871 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.129 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.129 08:53:46 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:52.129 08:53:46 nvme_fdp -- common/autotest_common.sh@1101 -- # 
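The `ctrl_has_fdp` loop above decides FDP support by testing bit 19 of each controller's CTRATT value: `0x88010` (nvme3) has it set, while `0x8000` (nvme0/1/2) does not, so only nvme3 is echoed. A minimal sketch of that same bit test:

```shell
#!/usr/bin/env bash
# Sketch of the CTRATT bit test from ctrl_has_fdp in the trace above:
# bit 19 of CTRATT advertises Flexible Data Placement support.
ctratt_has_fdp() {
    local ctratt=$1
    # (( )) succeeds when the masked value is nonzero
    (( ctratt & 1 << 19 ))
}

ctratt_has_fdp 0x8000  && echo "fdp" || echo "no fdp"   # nvme0/1/2 in the log
ctratt_has_fdp 0x88010 && echo "fdp" || echo "no fdp"   # nvme3 in the log
```

Prints "no fdp" then "fdp": `1 << 19` is `0x80000`, which intersects `0x88010` but not `0x8000`.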
'[' 4 -le 1 ']' 00:09:52.129 08:53:46 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.129 08:53:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:52.129 ************************************ 00:09:52.129 START TEST nvme_flexible_data_placement 00:09:52.129 ************************************ 00:09:52.129 08:53:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:52.388 Initializing NVMe Controllers 00:09:52.388 Attaching to 0000:00:13.0 00:09:52.388 Controller supports FDP Attached to 0000:00:13.0 00:09:52.388 Namespace ID: 1 Endurance Group ID: 1 00:09:52.388 Initialization complete. 00:09:52.388 00:09:52.388 ================================== 00:09:52.388 == FDP tests for Namespace: #01 == 00:09:52.388 ================================== 00:09:52.388 00:09:52.388 Get Feature: FDP: 00:09:52.388 ================= 00:09:52.388 Enabled: Yes 00:09:52.388 FDP configuration Index: 0 00:09:52.388 00:09:52.388 FDP configurations log page 00:09:52.388 =========================== 00:09:52.388 Number of FDP configurations: 1 00:09:52.388 Version: 0 00:09:52.388 Size: 112 00:09:52.388 FDP Configuration Descriptor: 0 00:09:52.388 Descriptor Size: 96 00:09:52.388 Reclaim Group Identifier format: 2 00:09:52.388 FDP Volatile Write Cache: Not Present 00:09:52.388 FDP Configuration: Valid 00:09:52.388 Vendor Specific Size: 0 00:09:52.388 Number of Reclaim Groups: 2 00:09:52.388 Number of Reclaim Unit Handles: 8 00:09:52.388 Max Placement Identifiers: 128 00:09:52.388 Number of Namespaces Supported: 256 00:09:52.388 Reclaim Unit Nominal Size: 6000000 bytes 00:09:52.388 Estimated Reclaim Unit Time Limit: Not Reported 00:09:52.388 RUH Desc #000: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #001: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #002: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #003: RUH Type: Initially 
Isolated 00:09:52.388 RUH Desc #004: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #005: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #006: RUH Type: Initially Isolated 00:09:52.388 RUH Desc #007: RUH Type: Initially Isolated 00:09:52.388 00:09:52.388 FDP reclaim unit handle usage log page 00:09:52.388 ====================================== 00:09:52.388 Number of Reclaim Unit Handles: 8 00:09:52.388 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:52.388 RUH Usage Desc #001: RUH Attributes: Unused 00:09:52.388 RUH Usage Desc #002: RUH Attributes: Unused 00:09:52.388 RUH Usage Desc #003: RUH Attributes: Unused 00:09:52.388 RUH Usage Desc #004: RUH Attributes: Unused 00:09:52.388 RUH Usage Desc #005: RUH Attributes: Unused 00:09:52.389 RUH Usage Desc #006: RUH Attributes: Unused 00:09:52.389 RUH Usage Desc #007: RUH Attributes: Unused 00:09:52.389 00:09:52.389 FDP statistics log page 00:09:52.389 ======================= 00:09:52.389 Host bytes with metadata written: 1929097216 00:09:52.389 Media bytes with metadata written: 1930129408 00:09:52.389 Media bytes erased: 0 00:09:52.389 00:09:52.389 FDP Reclaim unit handle status 00:09:52.389 ============================== 00:09:52.389 Number of RUHS descriptors: 2 00:09:52.389 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005045 00:09:52.389 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:52.389 00:09:52.389 FDP write on placement id: 0 success 00:09:52.389 00:09:52.389 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:52.389 00:09:52.389 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:52.389 00:09:52.389 Get Feature: FDP Events for Placement handle: #0 00:09:52.389 ======================== 00:09:52.389 Number of FDP Events: 6 00:09:52.389 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:52.389 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:52.389 FDP 
Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:52.389 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:52.389 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:52.389 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:52.389 00:09:52.389 FDP events log page 00:09:52.389 =================== 00:09:52.389 Number of FDP events: 1 00:09:52.389 FDP Event #0: 00:09:52.389 Event Type: RU Not Written to Capacity 00:09:52.389 Placement Identifier: Valid 00:09:52.389 NSID: Valid 00:09:52.389 Location: Valid 00:09:52.389 Placement Identifier: 0 00:09:52.389 Event Timestamp: 4 00:09:52.389 Namespace Identifier: 1 00:09:52.389 Reclaim Group Identifier: 0 00:09:52.389 Reclaim Unit Handle Identifier: 0 00:09:52.389 00:09:52.389 FDP test passed 00:09:52.389 00:09:52.389 real 0m0.218s 00:09:52.389 user 0m0.064s 00:09:52.389 sys 0m0.053s 00:09:52.389 08:53:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.389 ************************************ 00:09:52.389 08:53:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 END TEST nvme_flexible_data_placement 00:09:52.389 ************************************ 00:09:52.389 ************************************ 00:09:52.389 END TEST nvme_fdp 00:09:52.389 ************************************ 00:09:52.389 00:09:52.389 real 0m7.693s 00:09:52.389 user 0m1.027s 00:09:52.389 sys 0m1.412s 00:09:52.389 08:53:46 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.389 08:53:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 08:53:46 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:52.389 08:53:46 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:52.389 08:53:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.389 08:53:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.389 08:53:46 -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.389 ************************************ 00:09:52.389 START TEST nvme_rpc 00:09:52.389 ************************************ 00:09:52.389 08:53:46 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:52.389 * Looking for test storage... 00:09:52.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.389 08:53:46 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:52.389 08:53:46 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:52.389 08:53:46 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:52.647 08:53:46 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.647 08:53:46 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.648 08:53:46 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.648 --rc genhtml_branch_coverage=1 00:09:52.648 --rc genhtml_function_coverage=1 00:09:52.648 --rc genhtml_legend=1 00:09:52.648 --rc geninfo_all_blocks=1 00:09:52.648 --rc geninfo_unexecuted_blocks=1 00:09:52.648 00:09:52.648 ' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.648 --rc genhtml_branch_coverage=1 00:09:52.648 --rc genhtml_function_coverage=1 00:09:52.648 --rc genhtml_legend=1 00:09:52.648 --rc geninfo_all_blocks=1 00:09:52.648 --rc geninfo_unexecuted_blocks=1 00:09:52.648 00:09:52.648 ' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1695 -- # export 
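The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` (`IFS=.-:` with `read -ra`), then walks the fields in parallel, padding the shorter list with zeros. A condensed sketch of that comparison, keeping only the "less than" path exercised here:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison exercised by `lt 1.15 2` above:
# split on .-: and compare numeric fields left to right.
lt() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}   # missing fields compare as 0
        d2=${ver2[v]:-0}
        (( d1 > d2 )) && return 1
        (( d1 < d2 )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "older" || echo "not older"
```

Prints "older". This sketch assumes purely numeric fields, as in the lcov version check; the real `scripts/common.sh` helper also handles the other comparison operators.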
'LCOV=lcov 00:09:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.648 --rc genhtml_branch_coverage=1 00:09:52.648 --rc genhtml_function_coverage=1 00:09:52.648 --rc genhtml_legend=1 00:09:52.648 --rc geninfo_all_blocks=1 00:09:52.648 --rc geninfo_unexecuted_blocks=1 00:09:52.648 00:09:52.648 ' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:52.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.648 --rc genhtml_branch_coverage=1 00:09:52.648 --rc genhtml_function_coverage=1 00:09:52.648 --rc genhtml_legend=1 00:09:52.648 --rc geninfo_all_blocks=1 00:09:52.648 --rc geninfo_unexecuted_blocks=1 00:09:52.648 00:09:52.648 ' 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.648 
08:53:46 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:52.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78048 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:52.648 08:53:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78048 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78048 ']' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.648 08:53:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.648 [2024-11-28 08:53:46.705713] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:52.648 [2024-11-28 08:53:46.705994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78048 ] 00:09:52.906 [2024-11-28 08:53:46.856754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.906 [2024-11-28 08:53:46.889997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.906 [2024-11-28 08:53:46.890041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.472 08:53:47 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.472 08:53:47 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:53.472 08:53:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:53.730 Nvme0n1 00:09:53.730 08:53:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:53.730 08:53:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:53.990 request: 00:09:53.990 { 00:09:53.990 "bdev_name": "Nvme0n1", 00:09:53.990 "filename": "non_existing_file", 00:09:53.990 "method": "bdev_nvme_apply_firmware", 00:09:53.990 "req_id": 1 00:09:53.990 } 00:09:53.990 Got JSON-RPC error response 00:09:53.990 response: 00:09:53.990 { 00:09:53.990 "code": -32603, 00:09:53.990 "message": "open file failed." 
00:09:53.990 } 00:09:53.990 08:53:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:53.990 08:53:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:53.990 08:53:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:54.252 08:53:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:54.252 08:53:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78048 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78048 ']' 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78048 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78048 00:09:54.252 killing process with pid 78048 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78048' 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78048 00:09:54.252 08:53:48 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78048 00:09:54.513 ************************************ 00:09:54.513 END TEST nvme_rpc 00:09:54.513 ************************************ 00:09:54.513 00:09:54.513 real 0m2.145s 00:09:54.513 user 0m4.157s 00:09:54.513 sys 0m0.473s 00:09:54.513 08:53:48 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.513 08:53:48 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.513 08:53:48 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:54.513 08:53:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:54.513 
08:53:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.513 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:09:54.775 ************************************ 00:09:54.775 START TEST nvme_rpc_timeouts 00:09:54.775 ************************************ 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:54.775 * Looking for test storage... 00:09:54.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:54.775 
08:53:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.775 08:53:48 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:54.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.775 --rc genhtml_branch_coverage=1 00:09:54.775 --rc genhtml_function_coverage=1 00:09:54.775 --rc genhtml_legend=1 00:09:54.775 --rc geninfo_all_blocks=1 00:09:54.775 --rc geninfo_unexecuted_blocks=1 00:09:54.775 00:09:54.775 ' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:54.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.775 
--rc genhtml_branch_coverage=1 00:09:54.775 --rc genhtml_function_coverage=1 00:09:54.775 --rc genhtml_legend=1 00:09:54.775 --rc geninfo_all_blocks=1 00:09:54.775 --rc geninfo_unexecuted_blocks=1 00:09:54.775 00:09:54.775 ' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:54.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.775 --rc genhtml_branch_coverage=1 00:09:54.775 --rc genhtml_function_coverage=1 00:09:54.775 --rc genhtml_legend=1 00:09:54.775 --rc geninfo_all_blocks=1 00:09:54.775 --rc geninfo_unexecuted_blocks=1 00:09:54.775 00:09:54.775 ' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:54.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.775 --rc genhtml_branch_coverage=1 00:09:54.775 --rc genhtml_function_coverage=1 00:09:54.775 --rc genhtml_legend=1 00:09:54.775 --rc geninfo_all_blocks=1 00:09:54.775 --rc geninfo_unexecuted_blocks=1 00:09:54.775 00:09:54.775 ' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78102 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78102 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78134 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78134 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78134 ']' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.775 08:53:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.775 08:53:48 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:55.037 [2024-11-28 08:53:48.899255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:55.037 [2024-11-28 08:53:48.899663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78134 ] 00:09:55.037 [2024-11-28 08:53:49.056523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.037 [2024-11-28 08:53:49.107464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.037 [2024-11-28 08:53:49.107517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.981 08:53:49 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.981 08:53:49 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:55.981 Checking default timeout settings: 00:09:55.981 08:53:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:55.981 08:53:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:55.981 Making settings changes with rpc: 00:09:55.981 
08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:55.981 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:56.243 Check default vs. modified settings: 00:09:56.243 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:56.243 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 Setting action_on_timeout is changed as expected. 
00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.814 Setting timeout_us is changed as expected. 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:56.814 Setting timeout_admin_us is changed as expected. 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78102 /tmp/settings_modified_78102 00:09:56.814 08:53:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78134 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78134 ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78134 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78134 00:09:56.814 killing process with pid 78134 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78134' 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78134 00:09:56.814 08:53:50 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78134 00:09:57.076 RPC TIMEOUT SETTING TEST PASSED. 00:09:57.076 08:53:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:57.076 00:09:57.076 real 0m2.434s 00:09:57.076 user 0m4.695s 00:09:57.076 sys 0m0.612s 00:09:57.076 08:53:51 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.076 ************************************ 00:09:57.076 END TEST nvme_rpc_timeouts 00:09:57.076 ************************************ 00:09:57.076 08:53:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:57.076 08:53:51 -- spdk/autotest.sh@239 -- # uname -s 00:09:57.076 08:53:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:57.076 08:53:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:57.076 08:53:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:57.076 08:53:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.076 08:53:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.076 ************************************ 00:09:57.076 START TEST sw_hotplug 00:09:57.076 ************************************ 00:09:57.076 08:53:51 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:57.338 * Looking for test storage... 
00:09:57.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.338 08:53:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.338 --rc genhtml_branch_coverage=1 00:09:57.338 --rc genhtml_function_coverage=1 00:09:57.338 --rc genhtml_legend=1 00:09:57.338 --rc geninfo_all_blocks=1 00:09:57.338 --rc geninfo_unexecuted_blocks=1 00:09:57.338 00:09:57.338 ' 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.338 --rc genhtml_branch_coverage=1 00:09:57.338 --rc genhtml_function_coverage=1 00:09:57.338 --rc genhtml_legend=1 00:09:57.338 --rc geninfo_all_blocks=1 00:09:57.338 --rc geninfo_unexecuted_blocks=1 00:09:57.338 00:09:57.338 ' 00:09:57.338 08:53:51 sw_hotplug -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.338 --rc genhtml_branch_coverage=1 00:09:57.338 --rc genhtml_function_coverage=1 00:09:57.338 --rc genhtml_legend=1 00:09:57.338 --rc geninfo_all_blocks=1 00:09:57.338 --rc geninfo_unexecuted_blocks=1 00:09:57.338 00:09:57.338 ' 00:09:57.338 08:53:51 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.338 --rc genhtml_branch_coverage=1 00:09:57.338 --rc genhtml_function_coverage=1 00:09:57.338 --rc genhtml_legend=1 00:09:57.338 --rc geninfo_all_blocks=1 00:09:57.338 --rc geninfo_unexecuted_blocks=1 00:09:57.338 00:09:57.338 ' 00:09:57.338 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.861 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.861 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.861 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.861 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:57.861 08:53:51 
sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 
00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 
00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:57.861 08:53:51 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:57.861 08:53:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:58.123 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.385 Waiting for block devices as requested 00:09:58.385 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.385 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.646 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.647 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.952 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:03.952 08:53:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:03.952 08:53:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:04.212 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:04.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:04.212 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:04.472 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:04.733 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.733 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:04.733 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:04.733 08:53:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78981 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:04.994 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:04.995 08:53:58 sw_hotplug 
-- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:04.995 08:53:58 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:04.995 08:53:58 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:04.995 08:53:58 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:04.995 08:53:58 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:04.995 08:53:58 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:04.995 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:04.995 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:04.995 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:04.995 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:04.995 08:53:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:05.256 Initializing NVMe Controllers 00:10:05.256 Attaching to 0000:00:10.0 00:10:05.256 Attaching to 0000:00:11.0 00:10:05.256 Attached to 0000:00:11.0 00:10:05.256 Attached to 0000:00:10.0 00:10:05.256 Initialization complete. Starting I/O... 
00:10:05.256 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:05.256 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:05.256 00:10:06.239 QEMU NVMe Ctrl (12341 ): 2592 I/Os completed (+2592) 00:10:06.239 QEMU NVMe Ctrl (12340 ): 2594 I/Os completed (+2594) 00:10:06.239 00:10:07.179 QEMU NVMe Ctrl (12341 ): 5832 I/Os completed (+3240) 00:10:07.179 QEMU NVMe Ctrl (12340 ): 5838 I/Os completed (+3244) 00:10:07.179 00:10:08.118 QEMU NVMe Ctrl (12341 ): 9060 I/Os completed (+3228) 00:10:08.118 QEMU NVMe Ctrl (12340 ): 9066 I/Os completed (+3228) 00:10:08.118 00:10:09.057 QEMU NVMe Ctrl (12341 ): 12189 I/Os completed (+3129) 00:10:09.057 QEMU NVMe Ctrl (12340 ): 12345 I/Os completed (+3279) 00:10:09.057 00:10:10.441 QEMU NVMe Ctrl (12341 ): 15178 I/Os completed (+2989) 00:10:10.441 QEMU NVMe Ctrl (12340 ): 15336 I/Os completed (+2991) 00:10:10.441 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.010 [2024-11-28 08:54:04.938476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:11.010 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.010 [2024-11-28 08:54:04.941235] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.941338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.941358] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.941377] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.010 [2024-11-28 08:54:04.942816] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.942868] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.942883] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.942899] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.010 [2024-11-28 08:54:04.967987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:11.010 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.010 [2024-11-28 08:54:04.969102] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.969156] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.969177] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.969193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.010 [2024-11-28 08:54:04.970434] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.970476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.970496] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 [2024-11-28 08:54:04.970509] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.010 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.010 EAL: Scan for (pci) bus failed. 
00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.010 08:54:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.010 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.010 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.010 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.010 00:10:11.270 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.271 Attaching to 0000:00:10.0 00:10:11.271 Attached to 0000:00:10.0 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.271 08:54:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:11.271 Attaching to 0000:00:11.0 00:10:11.271 Attached to 0000:00:11.0 00:10:12.213 QEMU NVMe Ctrl (12340 ): 2496 I/Os completed (+2496) 00:10:12.213 QEMU NVMe Ctrl (12341 ): 2282 I/Os completed (+2282) 00:10:12.213 00:10:13.155 QEMU NVMe Ctrl (12340 ): 5244 I/Os completed (+2748) 00:10:13.155 QEMU NVMe Ctrl (12341 ): 5030 I/Os completed (+2748) 00:10:13.155 00:10:14.089 QEMU NVMe Ctrl (12340 ): 9328 I/Os completed (+4084) 00:10:14.089 QEMU NVMe Ctrl (12341 ): 9114 I/Os completed (+4084) 00:10:14.089 00:10:15.023 QEMU NVMe Ctrl (12340 ): 13659 I/Os completed (+4331) 00:10:15.023 QEMU NVMe Ctrl (12341 ): 13387 I/Os completed (+4273) 00:10:15.023 00:10:16.397 QEMU NVMe Ctrl (12340 ): 17958 I/Os completed (+4299) 00:10:16.397 QEMU NVMe Ctrl (12341 ): 17701 I/Os completed (+4314) 00:10:16.397 00:10:17.330 QEMU NVMe Ctrl (12340 ): 22311 I/Os completed (+4353) 00:10:17.330 
QEMU NVMe Ctrl (12341 ): 22068 I/Os completed (+4367) 00:10:17.330 00:10:18.273 QEMU NVMe Ctrl (12340 ): 26086 I/Os completed (+3775) 00:10:18.273 QEMU NVMe Ctrl (12341 ): 25779 I/Os completed (+3711) 00:10:18.273 00:10:19.214 QEMU NVMe Ctrl (12340 ): 28974 I/Os completed (+2888) 00:10:19.214 QEMU NVMe Ctrl (12341 ): 28667 I/Os completed (+2888) 00:10:19.214 00:10:20.157 QEMU NVMe Ctrl (12340 ): 31982 I/Os completed (+3008) 00:10:20.157 QEMU NVMe Ctrl (12341 ): 31675 I/Os completed (+3008) 00:10:20.157 00:10:21.100 QEMU NVMe Ctrl (12340 ): 35845 I/Os completed (+3863) 00:10:21.100 QEMU NVMe Ctrl (12341 ): 35528 I/Os completed (+3853) 00:10:21.100 00:10:22.042 QEMU NVMe Ctrl (12340 ): 40455 I/Os completed (+4610) 00:10:22.042 QEMU NVMe Ctrl (12341 ): 40111 I/Os completed (+4583) 00:10:22.042 00:10:23.429 QEMU NVMe Ctrl (12340 ): 45113 I/Os completed (+4658) 00:10:23.429 QEMU NVMe Ctrl (12341 ): 44796 I/Os completed (+4685) 00:10:23.429 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.430 [2024-11-28 08:54:17.280681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:23.430 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:23.430 [2024-11-28 08:54:17.281490] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.281526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.281538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.281552] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.430 [2024-11-28 08:54:17.282544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.282568] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.282577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.282590] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.430 [2024-11-28 08:54:17.305567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:23.430 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:23.430 [2024-11-28 08:54:17.306292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.306322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.306334] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.306345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.430 [2024-11-28 08:54:17.307154] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.307178] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.307193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 [2024-11-28 08:54:17.307204] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.430 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:23.430 EAL: Scan for (pci) bus failed. 
00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.430 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:23.430 Attaching to 0000:00:10.0 00:10:23.430 Attached to 0000:00:10.0 00:10:23.691 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:23.691 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.691 08:54:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:23.691 Attaching to 0000:00:11.0 00:10:23.691 Attached to 0000:00:11.0 00:10:24.264 QEMU NVMe Ctrl (12340 ): 3084 I/Os completed (+3084) 00:10:24.264 QEMU NVMe Ctrl (12341 ): 2696 I/Os completed (+2696) 00:10:24.264 00:10:25.204 QEMU NVMe Ctrl (12340 ): 7699 I/Os completed (+4615) 00:10:25.204 QEMU NVMe Ctrl (12341 ): 7308 I/Os completed (+4612) 00:10:25.204 00:10:26.146 QEMU NVMe Ctrl (12340 ): 12316 I/Os completed (+4617) 00:10:26.146 QEMU NVMe Ctrl (12341 ): 11955 I/Os completed (+4647) 00:10:26.146 00:10:27.089 QEMU NVMe Ctrl (12340 ): 15691 I/Os completed (+3375) 00:10:27.089 QEMU NVMe Ctrl (12341 ): 15364 I/Os completed (+3409) 00:10:27.089 00:10:28.030 QEMU NVMe Ctrl (12340 ): 18974 I/Os completed (+3283) 00:10:28.030 QEMU NVMe Ctrl (12341 ): 18554 I/Os completed (+3190) 00:10:28.030 00:10:29.450 QEMU NVMe Ctrl (12340 ): 23174 I/Os completed (+4200) 00:10:29.450 QEMU NVMe 
Ctrl (12341 ): 22675 I/Os completed (+4121) 00:10:29.450 00:10:30.020 QEMU NVMe Ctrl (12340 ): 26824 I/Os completed (+3650) 00:10:30.020 QEMU NVMe Ctrl (12341 ): 26283 I/Os completed (+3608) 00:10:30.020 00:10:31.398 QEMU NVMe Ctrl (12340 ): 31473 I/Os completed (+4649) 00:10:31.398 QEMU NVMe Ctrl (12341 ): 30770 I/Os completed (+4487) 00:10:31.398 00:10:32.340 QEMU NVMe Ctrl (12340 ): 35905 I/Os completed (+4432) 00:10:32.340 QEMU NVMe Ctrl (12341 ): 35183 I/Os completed (+4413) 00:10:32.340 00:10:33.275 QEMU NVMe Ctrl (12340 ): 38871 I/Os completed (+2966) 00:10:33.275 QEMU NVMe Ctrl (12341 ): 38207 I/Os completed (+3024) 00:10:33.275 00:10:34.213 QEMU NVMe Ctrl (12340 ): 42070 I/Os completed (+3199) 00:10:34.213 QEMU NVMe Ctrl (12341 ): 41540 I/Os completed (+3333) 00:10:34.213 00:10:35.151 QEMU NVMe Ctrl (12340 ): 45021 I/Os completed (+2951) 00:10:35.151 QEMU NVMe Ctrl (12341 ): 44491 I/Os completed (+2951) 00:10:35.151 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:35.718 [2024-11-28 08:54:29.567629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:35.718 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:35.718 [2024-11-28 08:54:29.568690] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.568733] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.568753] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.568773] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:35.718 [2024-11-28 08:54:29.570397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.570435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.570450] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.570464] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:35.718 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:35.718 [2024-11-28 08:54:29.588791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:35.718 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:35.718 [2024-11-28 08:54:29.591809] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.718 [2024-11-28 08:54:29.591847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 [2024-11-28 08:54:29.591864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 [2024-11-28 08:54:29.591876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:35.719 [2024-11-28 08:54:29.592989] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 [2024-11-28 08:54:29.593020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 [2024-11-28 08:54:29.593035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 [2024-11-28 08:54:29.593047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:35.719 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:35.719 EAL: Scan for (pci) bus failed. 
00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.719 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:35.719 Attaching to 0000:00:10.0 00:10:35.719 Attached to 0000:00:10.0 00:10:35.979 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.979 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.979 08:54:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:35.979 Attaching to 0000:00:11.0 00:10:35.979 Attached to 0000:00:11.0 00:10:35.979 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:35.979 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:35.979 [2024-11-28 08:54:29.865887] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:48.207 08:54:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.207 08:54:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.207 08:54:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.93 00:10:48.207 08:54:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.93 00:10:48.207 08:54:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:48.207 08:54:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.93 00:10:48.207 08:54:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 
'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.93 2 00:10:48.207 remove_attach_helper took 42.93s to complete (handling 2 nvme drive(s)) 08:54:41 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78981 00:10:54.800 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78981) - No such process 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78981 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79531 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79531 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79531 ']' 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.800 08:54:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.800 08:54:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:54.800 [2024-11-28 08:54:47.970590] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:54.800 [2024-11-28 08:54:47.970757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79531 ] 00:10:54.800 [2024-11-28 08:54:48.128036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.800 [2024-11-28 08:54:48.199714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:54.800 08:54:48 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local 
use_bdev=true 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:54.800 08:54:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.362 08:54:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.362 08:54:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.362 08:54:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:01.362 08:54:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:01.362 [2024-11-28 08:54:54.916837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:01.362 [2024-11-28 08:54:54.918009] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:54.918048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:54.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:54.918078] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:54.918087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:54.918094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:54.918105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:54.918112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:54.918120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:54.918126] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:54.918134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:54.918141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:55.316831] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:01.362 [2024-11-28 08:54:55.317961] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:55.317991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:55.318001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:55.318012] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:55.318019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.362 [2024-11-28 08:54:55.318028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.362 [2024-11-28 08:54:55.318034] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.362 [2024-11-28 08:54:55.318042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.363 [2024-11-28 08:54:55.318049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:01.363 [2024-11-28 08:54:55.318059] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.363 [2024-11-28 08:54:55.318065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:01.363 [2024-11-28 08:54:55.318073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.363 08:54:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.363 08:54:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.363 08:54:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:01.363 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.619 08:54:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # 
bdfs=($(bdev_bdfs)) 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.812 08:55:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.812 08:55:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.812 08:55:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.812 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.812 08:55:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.812 08:55:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.812 [2024-11-28 08:55:07.817052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:13.812 [2024-11-28 08:55:07.818174] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.812 [2024-11-28 08:55:07.818296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.812 [2024-11-28 08:55:07.818315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.812 [2024-11-28 08:55:07.818328] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.812 [2024-11-28 08:55:07.818337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.812 [2024-11-28 08:55:07.818345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.812 [2024-11-28 08:55:07.818354] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.813 [2024-11-28 08:55:07.818361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.813 [2024-11-28 08:55:07.818369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.813 [2024-11-28 08:55:07.818376] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.813 [2024-11-28 08:55:07.818385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.813 [2024-11-28 08:55:07.818391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.813 08:55:07 sw_hotplug -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.813 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:13.813 08:55:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.379 [2024-11-28 08:55:08.217052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:14.379 [2024-11-28 08:55:08.218244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.379 [2024-11-28 08:55:08.218275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.379 [2024-11-28 08:55:08.218285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.379 [2024-11-28 08:55:08.218296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.379 [2024-11-28 08:55:08.218303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.379 [2024-11-28 08:55:08.218312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.379 [2024-11-28 08:55:08.218318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.379 [2024-11-28 08:55:08.218326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.379 [2024-11-28 08:55:08.218333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.379 [2024-11-28 08:55:08.218341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.379 [2024-11-28 08:55:08.218347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.379 [2024-11-28 08:55:08.218355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.379 08:55:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.379 08:55:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.379 08:55:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.379 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.637 08:55:08 sw_hotplug -- 
nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.637 08:55:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.833 08:55:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.833 08:55:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.833 [2024-11-28 08:55:20.717295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:26.833 [2024-11-28 08:55:20.718427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.833 [2024-11-28 08:55:20.718459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.833 [2024-11-28 08:55:20.718473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.833 [2024-11-28 08:55:20.718486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.833 [2024-11-28 08:55:20.718495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.833 [2024-11-28 08:55:20.718503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.833 [2024-11-28 08:55:20.718511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.833 [2024-11-28 08:55:20.718518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.833 [2024-11-28 08:55:20.718525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.833 [2024-11-28 08:55:20.718532] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: 
aborting outstanding command 00:11:26.833 [2024-11-28 08:55:20.718539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.833 [2024-11-28 08:55:20.718546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.091 [2024-11-28 08:55:21.117296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:27.091 [2024-11-28 08:55:21.118401] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.091 [2024-11-28 08:55:21.118431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.091 [2024-11-28 08:55:21.118441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.091 [2024-11-28 08:55:21.118451] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.091 [2024-11-28 08:55:21.118458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.091 [2024-11-28 08:55:21.118468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.091 [2024-11-28 08:55:21.118475] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.091 [2024-11-28 08:55:21.118482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.091 [2024-11-28 08:55:21.118489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.091 [2024-11-28 08:55:21.118497] 
nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.091 [2024-11-28 08:55:21.118503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.091 [2024-11-28 08:55:21.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.091 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.092 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.092 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.092 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.092 08:55:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.092 08:55:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.092 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.092 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.350 08:55:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.350 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.350 
08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:27.608 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.608 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.608 08:55:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.72 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.72 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.72 00:11:39.808 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.72 2 00:11:39.808 remove_attach_helper took 44.72s to complete (handling 2 nvme drive(s)) 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.808 08:55:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:39.809 08:55:33 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:39.809 08:55:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:46.369 
08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.369 08:55:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.369 08:55:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.369 08:55:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:46.369 08:55:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.369 [2024-11-28 08:55:39.664840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:46.369 [2024-11-28 08:55:39.665682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:39.665707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:39.665721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:39.665734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:39.665743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:39.665750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:39.665759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:39.665765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:39.665778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:39.665784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:39.665793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:39.665809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 08:55:40 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.369 08:55:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.369 08:55:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.369 08:55:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:46.369 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.369 [2024-11-28 08:55:40.264853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:46.369 [2024-11-28 08:55:40.265677] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:40.265711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:40.265722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:40.265736] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:40.265744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:40.265753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:40.265759] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:40.265769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:40.265775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.369 [2024-11-28 08:55:40.265784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.369 [2024-11-28 08:55:40.265790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.369 [2024-11-28 08:55:40.265821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.628 08:55:40 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:46.628 08:55:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.628 08:55:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.628 08:55:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.628 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.887 08:55:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.088 08:55:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.088 08:55:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.088 08:55:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.088 08:55:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.088 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.088 08:55:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.089 08:55:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.089 [2024-11-28 08:55:53.065068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:59.089 [2024-11-28 08:55:53.066110] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.089 [2024-11-28 08:55:53.066544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.089 [2024-11-28 08:55:53.066565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.089 [2024-11-28 08:55:53.066579] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.089 [2024-11-28 08:55:53.066588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.089 [2024-11-28 08:55:53.066596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.089 [2024-11-28 08:55:53.066604] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.089 [2024-11-28 08:55:53.066612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.089 [2024-11-28 08:55:53.066620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.089 [2024-11-28 08:55:53.066626] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.089 [2024-11-28 08:55:53.066634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.089 [2024-11-28 08:55:53.066641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.089 08:55:53 sw_hotplug -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.089 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:59.089 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:59.347 [2024-11-28 08:55:53.465078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:59.606 [2024-11-28 08:55:53.466264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.606 [2024-11-28 08:55:53.466299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.606 [2024-11-28 08:55:53.466309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.606 [2024-11-28 08:55:53.466320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.606 [2024-11-28 08:55:53.466326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.606 [2024-11-28 08:55:53.466335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.606 [2024-11-28 08:55:53.466341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.606 [2024-11-28 08:55:53.466349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.606 [2024-11-28 08:55:53.466355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.606 [2024-11-28 08:55:53.466363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:59.606 [2024-11-28 08:55:53.466369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.606 [2024-11-28 08:55:53.466378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:59.606 08:55:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.606 08:55:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.606 08:55:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:59.606 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@62 
-- # echo '' 00:11:59.865 08:55:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.066 08:56:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.066 08:56:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.066 08:56:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:12.066 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.067 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.067 08:56:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.067 08:56:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.067 08:56:06 
sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.067 08:56:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:12.067 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:12.067 [2024-11-28 08:56:06.065266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:12.067 [2024-11-28 08:56:06.066477] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.067 [2024-11-28 08:56:06.066507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.067 [2024-11-28 08:56:06.066522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.067 [2024-11-28 08:56:06.066534] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.067 [2024-11-28 08:56:06.066545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.067 [2024-11-28 08:56:06.066552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.067 [2024-11-28 08:56:06.066561] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.067 [2024-11-28 08:56:06.066567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.067 [2024-11-28 08:56:06.066575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.067 [2024-11-28 08:56:06.066582] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 
00:12:12.067 [2024-11-28 08:56:06.066590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.067 [2024-11-28 08:56:06.066596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.632 [2024-11-28 08:56:06.465267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:12.632 [2024-11-28 08:56:06.466024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.632 [2024-11-28 08:56:06.466087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.632 [2024-11-28 08:56:06.466096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.632 [2024-11-28 08:56:06.466108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.632 [2024-11-28 08:56:06.466115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.632 [2024-11-28 08:56:06.466124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.632 [2024-11-28 08:56:06.466131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.632 [2024-11-28 08:56:06.466142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.632 [2024-11-28 08:56:06.466148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.632 [2024-11-28 08:56:06.466155] nvme_pcie_common.c: 
748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.632 [2024-11-28 08:56:06.466162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.632 [2024-11-28 08:56:06.466172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:12.632 08:56:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.632 08:56:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.632 08:56:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.632 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.891 08:56:06 sw_hotplug -- 
nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.891 08:56:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:25.090 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:25.090 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:25.090 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:25.090 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.34 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.34 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.34 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.34 2 00:12:25.091 remove_attach_helper took 45.34s to complete (handling 2 nvme drive(s)) 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:25.091 08:56:18 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79531 00:12:25.091 08:56:18 
sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79531 ']' 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79531 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79531 00:12:25.091 killing process with pid 79531 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79531' 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79531 00:12:25.091 08:56:18 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79531 00:12:25.375 08:56:19 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:25.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:25.935 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:25.935 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:26.196 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:26.196 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:26.196 00:12:26.196 real 2m29.090s 00:12:26.196 user 1m49.422s 00:12:26.196 sys 0m18.163s 00:12:26.196 08:56:20 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.196 08:56:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 ************************************ 00:12:26.196 END TEST sw_hotplug 00:12:26.196 ************************************ 00:12:26.196 08:56:20 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:26.196 08:56:20 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme 
/home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:26.196 08:56:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:26.196 08:56:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.196 08:56:20 -- common/autotest_common.sh@10 -- # set +x 00:12:26.196 ************************************ 00:12:26.196 START TEST nvme_xnvme 00:12:26.196 ************************************ 00:12:26.196 08:56:20 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:26.459 * Looking for test storage... 00:12:26.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@345 -- # : 1 
00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.459 --rc genhtml_branch_coverage=1 00:12:26.459 --rc genhtml_function_coverage=1 00:12:26.459 --rc genhtml_legend=1 00:12:26.459 --rc geninfo_all_blocks=1 00:12:26.459 --rc geninfo_unexecuted_blocks=1 00:12:26.459 00:12:26.459 ' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.459 --rc genhtml_branch_coverage=1 00:12:26.459 --rc genhtml_function_coverage=1 00:12:26.459 --rc genhtml_legend=1 
00:12:26.459 --rc geninfo_all_blocks=1 00:12:26.459 --rc geninfo_unexecuted_blocks=1 00:12:26.459 00:12:26.459 ' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.459 --rc genhtml_branch_coverage=1 00:12:26.459 --rc genhtml_function_coverage=1 00:12:26.459 --rc genhtml_legend=1 00:12:26.459 --rc geninfo_all_blocks=1 00:12:26.459 --rc geninfo_unexecuted_blocks=1 00:12:26.459 00:12:26.459 ' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.459 --rc genhtml_branch_coverage=1 00:12:26.459 --rc genhtml_function_coverage=1 00:12:26.459 --rc genhtml_legend=1 00:12:26.459 --rc geninfo_all_blocks=1 00:12:26.459 --rc geninfo_unexecuted_blocks=1 00:12:26.459 00:12:26.459 ' 00:12:26.459 08:56:20 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.459 08:56:20 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.459 08:56:20 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.459 08:56:20 nvme_xnvme -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.459 08:56:20 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.459 08:56:20 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:26.459 08:56:20 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.459 08:56:20 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.459 08:56:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.459 ************************************ 00:12:26.459 START TEST xnvme_to_malloc_dd_copy 00:12:26.459 ************************************ 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:26.459 08:56:20 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:26.459 08:56:20 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.460 { 00:12:26.460 "subsystems": [ 00:12:26.460 { 00:12:26.460 "subsystem": "bdev", 00:12:26.460 "config": [ 00:12:26.460 { 00:12:26.460 "params": { 00:12:26.460 "block_size": 512, 00:12:26.460 "num_blocks": 2097152, 00:12:26.460 "name": "malloc0" 00:12:26.460 }, 00:12:26.460 "method": "bdev_malloc_create" 00:12:26.460 }, 00:12:26.460 { 00:12:26.460 "params": { 00:12:26.460 "io_mechanism": "libaio", 00:12:26.460 "filename": "/dev/nullb0", 00:12:26.460 "name": "null0" 00:12:26.460 }, 00:12:26.460 "method": "bdev_xnvme_create" 00:12:26.460 }, 00:12:26.460 { 00:12:26.460 "method": "bdev_wait_for_examine" 00:12:26.460 } 00:12:26.460 ] 00:12:26.460 } 00:12:26.460 ] 00:12:26.460 } 00:12:26.721 [2024-11-28 08:56:20.589025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:26.721 [2024-11-28 08:56:20.589319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80901 ] 00:12:26.721 [2024-11-28 08:56:20.739762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.721 [2024-11-28 08:56:20.814271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.640  [2024-11-28T08:56:23.698Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-28T08:56:24.631Z] Copying: 459/1024 [MB] (240 MBps) [2024-11-28T08:56:25.251Z] Copying: 763/1024 [MB] (304 MBps) [2024-11-28T08:56:25.816Z] Copying: 1024/1024 [MB] (average 265 MBps) 00:12:31.696 00:12:31.696 08:56:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:31.696 08:56:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:31.696 08:56:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.696 08:56:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.696 { 00:12:31.696 "subsystems": [ 00:12:31.696 { 00:12:31.696 "subsystem": "bdev", 00:12:31.696 "config": [ 00:12:31.696 { 00:12:31.696 "params": { 00:12:31.696 "block_size": 512, 00:12:31.696 "num_blocks": 2097152, 00:12:31.696 "name": "malloc0" 00:12:31.696 }, 00:12:31.696 "method": "bdev_malloc_create" 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "params": { 00:12:31.696 "io_mechanism": "libaio", 00:12:31.696 "filename": "/dev/nullb0", 00:12:31.696 "name": "null0" 00:12:31.696 }, 00:12:31.696 "method": "bdev_xnvme_create" 00:12:31.696 }, 00:12:31.696 { 00:12:31.696 "method": "bdev_wait_for_examine" 00:12:31.696 } 00:12:31.696 ] 00:12:31.696 } 00:12:31.696 ] 00:12:31.696 } 00:12:31.696 [2024-11-28 08:56:25.711150] 
Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:31.696 [2024-11-28 08:56:25.711270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80966 ] 00:12:31.953 [2024-11-28 08:56:25.856544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.953 [2024-11-28 08:56:25.906224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.326  [2024-11-28T08:56:28.381Z] Copying: 305/1024 [MB] (305 MBps) [2024-11-28T08:56:29.316Z] Copying: 611/1024 [MB] (306 MBps) [2024-11-28T08:56:29.575Z] Copying: 918/1024 [MB] (306 MBps) [2024-11-28T08:56:30.145Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:12:36.025 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:36.025 08:56:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:36.025 { 00:12:36.025 "subsystems": [ 00:12:36.025 { 00:12:36.025 "subsystem": "bdev", 00:12:36.025 "config": [ 00:12:36.025 { 00:12:36.025 "params": { 00:12:36.025 "block_size": 512, 00:12:36.025 "num_blocks": 2097152, 00:12:36.025 "name": "malloc0" 00:12:36.025 }, 00:12:36.025 "method": "bdev_malloc_create" 00:12:36.025 }, 00:12:36.025 { 00:12:36.025 "params": { 00:12:36.025 "io_mechanism": "io_uring", 
00:12:36.025 "filename": "/dev/nullb0", 00:12:36.025 "name": "null0" 00:12:36.025 }, 00:12:36.025 "method": "bdev_xnvme_create" 00:12:36.025 }, 00:12:36.025 { 00:12:36.025 "method": "bdev_wait_for_examine" 00:12:36.025 } 00:12:36.025 ] 00:12:36.025 } 00:12:36.025 ] 00:12:36.025 } 00:12:36.025 [2024-11-28 08:56:30.095572] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:36.025 [2024-11-28 08:56:30.095698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81020 ] 00:12:36.285 [2024-11-28 08:56:30.244147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.285 [2024-11-28 08:56:30.298664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.661  [2024-11-28T08:56:32.715Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-28T08:56:33.650Z] Copying: 623/1024 [MB] (312 MBps) [2024-11-28T08:56:33.909Z] Copying: 936/1024 [MB] (312 MBps) [2024-11-28T08:56:34.478Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:40.358 00:12:40.358 08:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:40.358 08:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:40.358 08:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:40.358 08:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:40.358 { 00:12:40.358 "subsystems": [ 00:12:40.358 { 00:12:40.358 "subsystem": "bdev", 00:12:40.358 "config": [ 00:12:40.358 { 00:12:40.358 "params": { 00:12:40.358 "block_size": 512, 00:12:40.358 "num_blocks": 2097152, 00:12:40.358 "name": "malloc0" 00:12:40.358 }, 00:12:40.358 "method": "bdev_malloc_create" 00:12:40.358 }, 
00:12:40.358 { 00:12:40.358 "params": { 00:12:40.358 "io_mechanism": "io_uring", 00:12:40.358 "filename": "/dev/nullb0", 00:12:40.358 "name": "null0" 00:12:40.358 }, 00:12:40.358 "method": "bdev_xnvme_create" 00:12:40.358 }, 00:12:40.358 { 00:12:40.358 "method": "bdev_wait_for_examine" 00:12:40.358 } 00:12:40.358 ] 00:12:40.358 } 00:12:40.358 ] 00:12:40.358 } 00:12:40.358 [2024-11-28 08:56:34.389208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:40.358 [2024-11-28 08:56:34.389515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81074 ] 00:12:40.617 [2024-11-28 08:56:34.539080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.617 [2024-11-28 08:56:34.592955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.990  [2024-11-28T08:56:37.045Z] Copying: 316/1024 [MB] (316 MBps) [2024-11-28T08:56:37.981Z] Copying: 634/1024 [MB] (317 MBps) [2024-11-28T08:56:38.240Z] Copying: 952/1024 [MB] (318 MBps) [2024-11-28T08:56:38.810Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:44.690 00:12:44.690 08:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:44.690 08:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:44.690 ************************************ 00:12:44.690 END TEST xnvme_to_malloc_dd_copy 00:12:44.690 ************************************ 00:12:44.690 00:12:44.690 real 0m18.106s 00:12:44.690 user 0m14.766s 00:12:44.690 sys 0m2.842s 00:12:44.690 08:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.690 08:56:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.690 08:56:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # 
run_test xnvme_bdevperf xnvme_bdevperf 00:12:44.690 08:56:38 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:44.690 08:56:38 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.690 08:56:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.690 ************************************ 00:12:44.690 START TEST xnvme_bdevperf 00:12:44.690 ************************************ 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:44.690 08:56:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.690 { 00:12:44.690 "subsystems": [ 00:12:44.690 { 00:12:44.690 "subsystem": "bdev", 00:12:44.690 "config": [ 00:12:44.690 { 00:12:44.690 "params": { 00:12:44.690 "io_mechanism": "libaio", 00:12:44.690 "filename": "/dev/nullb0", 00:12:44.690 "name": "null0" 00:12:44.690 }, 00:12:44.690 "method": "bdev_xnvme_create" 00:12:44.690 }, 00:12:44.691 { 00:12:44.691 "method": "bdev_wait_for_examine" 00:12:44.691 } 00:12:44.691 ] 00:12:44.691 } 00:12:44.691 ] 00:12:44.691 } 00:12:44.691 [2024-11-28 08:56:38.746088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:44.691 [2024-11-28 08:56:38.746358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81152 ] 00:12:44.950 [2024-11-28 08:56:38.896110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.950 [2024-11-28 08:56:38.955675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.950 Running I/O for 5 seconds... 
00:12:47.261 207808.00 IOPS, 811.75 MiB/s [2024-11-28T08:56:42.317Z] 208192.00 IOPS, 813.25 MiB/s [2024-11-28T08:56:43.252Z] 208256.00 IOPS, 813.50 MiB/s [2024-11-28T08:56:44.187Z] 208288.00 IOPS, 813.62 MiB/s 00:12:50.067 Latency(us) 00:12:50.067 [2024-11-28T08:56:44.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.067 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:50.067 null0 : 5.00 208326.06 813.77 0.00 0.00 304.93 108.70 1524.97 00:12:50.067 [2024-11-28T08:56:44.187Z] =================================================================================================================== 00:12:50.067 [2024-11-28T08:56:44.187Z] Total : 208326.06 813.77 0.00 0.00 304.93 108.70 1524.97 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:50.328 08:56:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.328 { 00:12:50.328 "subsystems": [ 00:12:50.328 { 00:12:50.328 "subsystem": "bdev", 00:12:50.328 "config": [ 00:12:50.328 { 00:12:50.328 "params": { 00:12:50.328 "io_mechanism": "io_uring", 00:12:50.328 "filename": "/dev/nullb0", 00:12:50.328 "name": "null0" 00:12:50.328 }, 00:12:50.328 "method": "bdev_xnvme_create" 00:12:50.328 }, 00:12:50.328 { 00:12:50.328 "method": "bdev_wait_for_examine" 00:12:50.328 } 00:12:50.328 ] 00:12:50.328 } 00:12:50.328 ] 00:12:50.328 } 00:12:50.328 [2024-11-28 08:56:44.316351] Starting SPDK v24.09.1-pre git sha1 
b18e1bd62 / DPDK 23.11.0 initialization... 00:12:50.328 [2024-11-28 08:56:44.316459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81221 ] 00:12:50.586 [2024-11-28 08:56:44.463724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.586 [2024-11-28 08:56:44.518934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.586 Running I/O for 5 seconds... 00:12:52.902 238080.00 IOPS, 930.00 MiB/s [2024-11-28T08:56:47.656Z] 237952.00 IOPS, 929.50 MiB/s [2024-11-28T08:56:48.629Z] 237888.00 IOPS, 929.25 MiB/s [2024-11-28T08:56:50.007Z] 237840.00 IOPS, 929.06 MiB/s 00:12:55.887 Latency(us) 00:12:55.887 [2024-11-28T08:56:50.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.887 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:55.887 null0 : 5.00 237808.64 928.94 0.00 0.00 267.08 146.51 1524.97 00:12:55.887 [2024-11-28T08:56:50.007Z] =================================================================================================================== 00:12:55.887 [2024-11-28T08:56:50.007Z] Total : 237808.64 928.94 0.00 0.00 267.08 146.51 1524.97 00:12:55.887 08:56:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:55.887 08:56:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:55.887 00:12:55.887 real 0m11.175s 00:12:55.887 user 0m8.741s 00:12:55.887 sys 0m2.207s 00:12:55.887 ************************************ 00:12:55.887 END TEST xnvme_bdevperf 00:12:55.887 ************************************ 00:12:55.887 08:56:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.887 08:56:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 
************************************ 00:12:55.887 END TEST nvme_xnvme 00:12:55.887 ************************************ 00:12:55.887 00:12:55.887 real 0m29.570s 00:12:55.887 user 0m23.625s 00:12:55.887 sys 0m5.184s 00:12:55.887 08:56:49 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.887 08:56:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 08:56:49 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:55.887 08:56:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.887 08:56:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.887 08:56:49 -- common/autotest_common.sh@10 -- # set +x 00:12:55.887 ************************************ 00:12:55.887 START TEST blockdev_xnvme 00:12:55.887 ************************************ 00:12:55.887 08:56:49 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:56.146 * Looking for test storage... 
00:12:56.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:56.146 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:56.146 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:56.146 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:56.146 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:56.146 08:56:50 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.147 08:56:50 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.147 --rc genhtml_branch_coverage=1 00:12:56.147 --rc genhtml_function_coverage=1 00:12:56.147 --rc genhtml_legend=1 00:12:56.147 --rc geninfo_all_blocks=1 00:12:56.147 --rc geninfo_unexecuted_blocks=1 00:12:56.147 00:12:56.147 ' 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.147 --rc genhtml_branch_coverage=1 00:12:56.147 --rc genhtml_function_coverage=1 00:12:56.147 --rc genhtml_legend=1 00:12:56.147 --rc geninfo_all_blocks=1 00:12:56.147 --rc geninfo_unexecuted_blocks=1 00:12:56.147 
00:12:56.147 ' 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.147 --rc genhtml_branch_coverage=1 00:12:56.147 --rc genhtml_function_coverage=1 00:12:56.147 --rc genhtml_legend=1 00:12:56.147 --rc geninfo_all_blocks=1 00:12:56.147 --rc geninfo_unexecuted_blocks=1 00:12:56.147 00:12:56.147 ' 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.147 --rc genhtml_branch_coverage=1 00:12:56.147 --rc genhtml_function_coverage=1 00:12:56.147 --rc genhtml_legend=1 00:12:56.147 --rc geninfo_all_blocks=1 00:12:56.147 --rc geninfo_unexecuted_blocks=1 00:12:56.147 00:12:56.147 ' 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 
00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:56.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81367 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81367 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81367 ']' 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.147 08:56:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.147 08:56:50 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:56.147 [2024-11-28 08:56:50.195872] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:56.147 [2024-11-28 08:56:50.196231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81367 ] 00:12:56.407 [2024-11-28 08:56:50.347871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.407 [2024-11-28 08:56:50.408635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.974 08:56:51 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.974 08:56:51 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:56.974 08:56:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:56.974 08:56:51 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:56.974 08:56:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:56.974 08:56:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:56.974 08:56:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:57.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:57.490 Waiting for block devices as requested 00:12:57.490 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.490 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.749 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.749 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 
00:13:03.017 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:03.017 08:56:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:03.017 08:56:56 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.017 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.018 08:56:56 blockdev_xnvme -- 
bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:03.018 nvme0n1 00:13:03.018 nvme1n1 00:13:03.018 nvme2n1 00:13:03.018 nvme2n2 00:13:03.018 nvme2n3 00:13:03.018 nvme3n1 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.018 08:56:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.018 08:56:56 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "be0e98d1-042e-42ff-a24b-59a38e52fb08"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be0e98d1-042e-42ff-a24b-59a38e52fb08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "85009075-96f9-415b-b424-ad25d3eb68e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "85009075-96f9-415b-b424-ad25d3eb68e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "99f79037-384a-4b03-b65f-9221ae84c8c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99f79037-384a-4b03-b65f-9221ae84c8c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "006653db-cfa6-44a4-88df-8bc3258536aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "006653db-cfa6-44a4-88df-8bc3258536aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5cb5b510-b7b0-4127-b1ed-43e8ae2ce992"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5cb5b510-b7b0-4127-b1ed-43e8ae2ce992",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": 
false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f403ace5-2fec-46d1-9d6a-15387d3490e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f403ace5-2fec-46d1-9d6a-15387d3490e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:03.018 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81367 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81367 ']' 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81367 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81367 00:13:03.018 killing process with pid 81367 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.018 
08:56:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81367' 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81367 00:13:03.018 08:56:57 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81367 00:13:03.588 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:03.588 08:56:57 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:03.588 08:56:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:03.588 08:56:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.588 08:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.588 ************************************ 00:13:03.588 START TEST bdev_hello_world 00:13:03.588 ************************************ 00:13:03.588 08:56:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:03.588 [2024-11-28 08:56:57.469530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:03.588 [2024-11-28 08:56:57.469653] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81714 ] 00:13:03.588 [2024-11-28 08:56:57.615657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.588 [2024-11-28 08:56:57.665346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.847 [2024-11-28 08:56:57.842655] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:03.847 [2024-11-28 08:56:57.842699] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:03.847 [2024-11-28 08:56:57.842717] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:03.847 [2024-11-28 08:56:57.844408] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:03.847 [2024-11-28 08:56:57.844964] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:03.847 [2024-11-28 08:56:57.844985] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:03.847 [2024-11-28 08:56:57.845549] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:03.847 00:13:03.847 [2024-11-28 08:56:57.845592] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:04.108 ************************************ 00:13:04.108 END TEST bdev_hello_world 00:13:04.108 ************************************ 00:13:04.108 00:13:04.108 real 0m0.608s 00:13:04.108 user 0m0.312s 00:13:04.108 sys 0m0.179s 00:13:04.108 08:56:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.108 08:56:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:04.108 08:56:58 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:04.108 08:56:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.108 08:56:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.108 08:56:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.108 ************************************ 00:13:04.108 START TEST bdev_bounds 00:13:04.108 ************************************ 00:13:04.108 Process bdevio pid: 81734 00:13:04.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81734 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81734' 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81734 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81734 ']' 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.108 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:04.108 [2024-11-28 08:56:58.141367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:04.108 [2024-11-28 08:56:58.141493] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81734 ] 00:13:04.368 [2024-11-28 08:56:58.290569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.368 [2024-11-28 08:56:58.342001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.368 [2024-11-28 08:56:58.342266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.368 [2024-11-28 08:56:58.342317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.934 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.934 08:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:04.934 08:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:05.193 I/O targets: 00:13:05.193 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:05.193 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:05.193 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.193 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.193 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:05.193 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:05.193 00:13:05.193 00:13:05.193 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.193 http://cunit.sourceforge.net/ 00:13:05.193 00:13:05.193 00:13:05.193 Suite: bdevio tests on: nvme3n1 00:13:05.193 Test: blockdev write read block ...passed 00:13:05.193 Test: blockdev write zeroes read block ...passed 00:13:05.193 Test: blockdev write zeroes read no split ...passed 00:13:05.193 Test: blockdev write zeroes read split ...passed 00:13:05.193 Test: blockdev write zeroes read split partial ...passed 
00:13:05.193 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 Suite: bdevio tests on: nvme2n3 00:13:05.194 Test: blockdev write read block ...passed 00:13:05.194 Test: blockdev write zeroes read block ...passed 00:13:05.194 Test: blockdev write zeroes read no split ...passed 00:13:05.194 Test: blockdev write zeroes read split ...passed 00:13:05.194 Test: blockdev write zeroes read split partial ...passed 00:13:05.194 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write read 2 
blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 Suite: bdevio tests on: nvme2n2 00:13:05.194 Test: blockdev write read block ...passed 00:13:05.194 Test: blockdev write zeroes read block ...passed 00:13:05.194 Test: blockdev write zeroes read no split ...passed 00:13:05.194 Test: blockdev write zeroes read split ...passed 00:13:05.194 Test: blockdev write zeroes read split partial ...passed 00:13:05.194 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: 
blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 Suite: bdevio tests on: nvme2n1 00:13:05.194 Test: blockdev write read block ...passed 00:13:05.194 Test: blockdev write zeroes read block ...passed 00:13:05.194 Test: blockdev write zeroes read no split ...passed 00:13:05.194 Test: blockdev write zeroes read split ...passed 00:13:05.194 Test: blockdev write zeroes read split partial ...passed 00:13:05.194 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 Suite: bdevio tests on: nvme1n1 00:13:05.194 Test: blockdev write read block ...passed 00:13:05.194 Test: blockdev write zeroes read block ...passed 00:13:05.194 Test: blockdev write zeroes read no split ...passed 00:13:05.194 Test: blockdev write zeroes read split ...passed 00:13:05.194 Test: blockdev write zeroes read split partial 
...passed 00:13:05.194 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 Suite: bdevio tests on: nvme0n1 00:13:05.194 Test: blockdev write read block ...passed 00:13:05.194 Test: blockdev write zeroes read block ...passed 00:13:05.194 Test: blockdev write zeroes read no split ...passed 00:13:05.194 Test: blockdev write zeroes read split ...passed 00:13:05.194 Test: blockdev write zeroes read split partial ...passed 00:13:05.194 Test: blockdev reset ...passed 00:13:05.194 Test: blockdev write read 8 blocks ...passed 00:13:05.194 Test: blockdev write read size > 128k ...passed 00:13:05.194 Test: blockdev write read invalid size ...passed 00:13:05.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.194 Test: blockdev write read max offset ...passed 00:13:05.194 Test: blockdev write 
read 2 blocks on overlapped address offset ...passed 00:13:05.194 Test: blockdev writev readv 8 blocks ...passed 00:13:05.194 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.194 Test: blockdev writev readv block ...passed 00:13:05.194 Test: blockdev writev readv size > 128k ...passed 00:13:05.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.194 Test: blockdev comparev and writev ...passed 00:13:05.194 Test: blockdev nvme passthru rw ...passed 00:13:05.194 Test: blockdev nvme passthru vendor specific ...passed 00:13:05.194 Test: blockdev nvme admin passthru ...passed 00:13:05.194 Test: blockdev copy ...passed 00:13:05.194 00:13:05.194 Run Summary: Type Total Ran Passed Failed Inactive 00:13:05.194 suites 6 6 n/a 0 0 00:13:05.194 tests 138 138 138 0 0 00:13:05.194 asserts 780 780 780 0 n/a 00:13:05.194 00:13:05.194 Elapsed time = 0.419 seconds 00:13:05.194 0 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81734 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81734 ']' 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81734 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81734 00:13:05.194 killing process with pid 81734 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81734' 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 
-- # kill 81734 00:13:05.194 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81734 00:13:05.456 ************************************ 00:13:05.456 END TEST bdev_bounds 00:13:05.456 ************************************ 00:13:05.456 08:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:05.456 00:13:05.456 real 0m1.404s 00:13:05.456 user 0m3.382s 00:13:05.456 sys 0m0.302s 00:13:05.456 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.456 08:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:05.456 08:56:59 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:05.456 08:56:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:05.456 08:56:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.456 08:56:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.456 ************************************ 00:13:05.456 START TEST bdev_nbd 00:13:05.456 ************************************ 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:05.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81789 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- 
bdev/blockdev.sh@319 -- # waitforlisten 81789 /var/tmp/spdk-nbd.sock 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81789 ']' 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.456 08:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:05.718 [2024-11-28 08:56:59.622914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:05.718 [2024-11-28 08:56:59.623156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.718 [2024-11-28 08:56:59.773426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.718 [2024-11-28 08:56:59.817968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.661 1+0 records in 00:13:06.661 1+0 records out 00:13:06.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114953 s, 3.6 MB/s 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.661 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:06.923 08:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.923 1+0 records in 00:13:06.923 1+0 records out 00:13:06.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125921 s, 3.3 MB/s 00:13:06.923 08:57:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:06.923 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.186 08:57:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.186 1+0 records in 00:13:07.186 1+0 records out 00:13:07.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104355 s, 3.9 MB/s 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.186 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 
00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.447 1+0 records in 00:13:07.447 1+0 records out 00:13:07.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111338 s, 3.7 MB/s 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.447 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.448 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.709 
08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.709 1+0 records in 00:13:07.709 1+0 records out 00:13:07.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135254 s, 3.0 MB/s 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.709 08:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:07.970 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 
00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.971 1+0 records in 00:13:07.971 1+0 records out 00:13:07.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110935 s, 3.7 MB/s 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:07.971 08:57:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd0", 00:13:08.232 "bdev_name": "nvme0n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd1", 00:13:08.232 "bdev_name": "nvme1n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd2", 00:13:08.232 "bdev_name": "nvme2n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd3", 00:13:08.232 "bdev_name": "nvme2n2" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd4", 00:13:08.232 "bdev_name": "nvme2n3" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd5", 00:13:08.232 "bdev_name": "nvme3n1" 00:13:08.232 } 00:13:08.232 ]' 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd0", 00:13:08.232 "bdev_name": "nvme0n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd1", 00:13:08.232 "bdev_name": "nvme1n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd2", 00:13:08.232 "bdev_name": "nvme2n1" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd3", 00:13:08.232 "bdev_name": "nvme2n2" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd4", 00:13:08.232 "bdev_name": "nvme2n3" 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "nbd_device": "/dev/nbd5", 00:13:08.232 "bdev_name": "nvme3n1" 00:13:08.232 } 00:13:08.232 ]' 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 
/dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.232 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.493 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:08.755 08:57:02 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:08.755 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.015 08:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd3 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.277 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.535 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:09.793 08:57:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i < 6 )) 00:13:09.793 08:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:10.051 /dev/nbd0 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.051 1+0 records in 00:13:10.051 1+0 records out 00:13:10.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498875 s, 8.2 MB/s 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.051 08:57:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.051 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:10.311 /dev/nbd1 00:13:10.311 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.312 1+0 records in 00:13:10.312 1+0 records out 00:13:10.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105602 s, 3.9 MB/s 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:10.312 08:57:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.312 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:10.573 /dev/nbd10 00:13:10.573 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:10.573 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.574 1+0 records in 00:13:10.574 1+0 records out 00:13:10.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127224 s, 3.2 MB/s 00:13:10.574 
08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.574 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:10.835 /dev/nbd11 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- 
# dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.835 1+0 records in 00:13:10.835 1+0 records out 00:13:10.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108156 s, 3.8 MB/s 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:10.835 08:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:11.098 /dev/nbd12 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.098 08:57:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.098 1+0 records in 00:13:11.098 1+0 records out 00:13:11.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108087 s, 3.8 MB/s 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.098 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:11.360 /dev/nbd13 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( 
i <= 20 )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.360 1+0 records in 00:13:11.360 1+0 records out 00:13:11.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125754 s, 3.3 MB/s 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.360 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:11.621 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:11.621 { 00:13:11.621 "nbd_device": 
"/dev/nbd0", 00:13:11.621 "bdev_name": "nvme0n1" 00:13:11.621 }, 00:13:11.621 { 00:13:11.621 "nbd_device": "/dev/nbd1", 00:13:11.621 "bdev_name": "nvme1n1" 00:13:11.621 }, 00:13:11.621 { 00:13:11.621 "nbd_device": "/dev/nbd10", 00:13:11.622 "bdev_name": "nvme2n1" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd11", 00:13:11.622 "bdev_name": "nvme2n2" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd12", 00:13:11.622 "bdev_name": "nvme2n3" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd13", 00:13:11.622 "bdev_name": "nvme3n1" 00:13:11.622 } 00:13:11.622 ]' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd0", 00:13:11.622 "bdev_name": "nvme0n1" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd1", 00:13:11.622 "bdev_name": "nvme1n1" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd10", 00:13:11.622 "bdev_name": "nvme2n1" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd11", 00:13:11.622 "bdev_name": "nvme2n2" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd12", 00:13:11.622 "bdev_name": "nvme2n3" 00:13:11.622 }, 00:13:11.622 { 00:13:11.622 "nbd_device": "/dev/nbd13", 00:13:11.622 "bdev_name": "nvme3n1" 00:13:11.622 } 00:13:11.622 ]' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:11.622 /dev/nbd1 00:13:11.622 /dev/nbd10 00:13:11.622 /dev/nbd11 00:13:11.622 /dev/nbd12 00:13:11.622 /dev/nbd13' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:11.622 /dev/nbd1 00:13:11.622 /dev/nbd10 00:13:11.622 /dev/nbd11 00:13:11.622 /dev/nbd12 00:13:11.622 /dev/nbd13' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:11.622 256+0 records in 00:13:11.622 256+0 records out 00:13:11.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00724487 s, 145 MB/s 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:11.622 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:11.883 256+0 records in 00:13:11.883 256+0 records out 00:13:11.883 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19979 s, 5.2 MB/s 00:13:11.883 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:11.883 08:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # 
dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:12.145 256+0 records in 00:13:12.145 256+0 records out 00:13:12.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.266729 s, 3.9 MB/s 00:13:12.145 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.145 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:12.407 256+0 records in 00:13:12.407 256+0 records out 00:13:12.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203418 s, 5.2 MB/s 00:13:12.407 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.407 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:12.407 256+0 records in 00:13:12.407 256+0 records out 00:13:12.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202408 s, 5.2 MB/s 00:13:12.407 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.407 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:12.668 256+0 records in 00:13:12.668 256+0 records out 00:13:12.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241499 s, 4.3 MB/s 00:13:12.668 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.668 08:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:12.928 256+0 records in 00:13:12.928 256+0 records out 00:13:12.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244187 s, 4.3 MB/s 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:12.928 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 
-- # break 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.187 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.446 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:13.704 08:57:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.704 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:13.962 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.963 08:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 
/proc/partitions 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.221 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.483 08:57:08 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:14.483 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:14.741 malloc_lvol_verify 00:13:14.741 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:14.999 e402337d-9b09-4234-8110-d07841d849e8 00:13:14.999 08:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:15.257 624972c8-7de1-41e3-8dbf-5dbfe52ee9ab 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:15.257 /dev/nbd0 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # 
wait_for_nbd_set_capacity /dev/nbd0 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:15.257 mke2fs 1.47.0 (5-Feb-2023) 00:13:15.257 Discarding device blocks: 0/4096 done 00:13:15.257 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:15.257 00:13:15.257 Allocating group tables: 0/1 done 00:13:15.257 Writing inode tables: 0/1 done 00:13:15.257 Creating journal (1024 blocks): done 00:13:15.257 Writing superblocks and filesystem accounting information: 0/1 done 00:13:15.257 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.257 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:15.258 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.258 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81789 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81789 ']' 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81789 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81789 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.517 killing process with pid 81789 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81789' 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81789 00:13:15.517 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81789 00:13:15.777 08:57:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:15.777 00:13:15.777 real 0m10.272s 00:13:15.777 user 0m13.867s 00:13:15.777 sys 0m3.780s 00:13:15.777 ************************************ 00:13:15.777 END TEST bdev_nbd 00:13:15.777 ************************************ 00:13:15.777 08:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.777 08:57:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.777 08:57:09 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:15.777 08:57:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:15.777 08:57:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:15.777 08:57:09 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:15.777 08:57:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:15.777 08:57:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.777 08:57:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.777 ************************************ 00:13:15.777 START TEST bdev_fio 00:13:15.777 ************************************ 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:15.777 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:15.777 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 
-- # local workload=verify 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:16.036 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 ************************************ 00:13:16.037 START TEST bdev_fio_rw_verify 00:13:16.037 ************************************ 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:16.037 08:57:09 
blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.037 08:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.037 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 job_nvme2n2: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.037 fio-3.35 00:13:16.037 Starting 6 threads 00:13:28.271 00:13:28.271 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82186: Thu Nov 28 08:57:20 2024 00:13:28.271 read: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(516MiB/10002msec) 00:13:28.271 slat (usec): min=2, max=1920, avg= 6.54, stdev=11.67 00:13:28.271 clat (usec): min=92, max=8429, avg=1490.45, stdev=789.68 00:13:28.271 lat (usec): min=96, max=8441, avg=1496.99, stdev=790.13 00:13:28.271 clat percentiles (usec): 00:13:28.271 | 50.000th=[ 1385], 99.000th=[ 3851], 99.900th=[ 5669], 99.990th=[ 8160], 00:13:28.271 | 99.999th=[ 8455] 00:13:28.271 write: IOPS=13.5k, BW=52.5MiB/s (55.1MB/s)(526MiB/10002msec); 0 zone resets 00:13:28.271 slat (usec): min=12, max=4136, avg=42.15, stdev=147.91 00:13:28.271 clat (usec): min=91, max=16411, avg=1775.26, stdev=957.60 00:13:28.271 lat (usec): min=108, max=16443, avg=1817.41, stdev=969.12 00:13:28.271 clat percentiles (usec): 00:13:28.271 | 50.000th=[ 1614], 99.000th=[ 5014], 99.900th=[ 7373], 99.990th=[11731], 00:13:28.271 | 99.999th=[13960] 00:13:28.271 bw ( KiB/s): min=44036, max=76203, per=100.00%, avg=54119.37, stdev=1642.67, samples=114 00:13:28.271 iops : min=11006, max=19050, avg=13529.05, stdev=410.67, samples=114 00:13:28.271 lat (usec) : 100=0.01%, 250=1.20%, 500=4.27%, 750=7.39%, 1000=10.61% 00:13:28.271 lat (msec) : 2=49.62%, 4=25.09%, 10=1.81%, 20=0.01% 00:13:28.271 cpu : usr=45.56%, sys=31.28%, ctx=4798, majf=0, minf=13854 00:13:28.271 IO depths : 1=11.2%, 2=23.5%, 4=51.3%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:28.271 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.271 issued rwts: total=132142,134541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.271 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.271 00:13:28.271 Run status group 0 (all jobs): 00:13:28.271 READ: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=516MiB (541MB), run=10002-10002msec 00:13:28.271 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=526MiB (551MB), run=10002-10002msec 00:13:28.271 ----------------------------------------------------- 00:13:28.271 Suppressions used: 00:13:28.271 count bytes template 00:13:28.271 6 48 /usr/src/fio/parse.c 00:13:28.271 2312 221952 /usr/src/fio/iolog.c 00:13:28.271 1 8 libtcmalloc_minimal.so 00:13:28.271 1 904 libcrypto.so 00:13:28.271 ----------------------------------------------------- 00:13:28.271 00:13:28.271 ************************************ 00:13:28.271 END TEST bdev_fio_rw_verify 00:13:28.271 ************************************ 00:13:28.271 00:13:28.271 real 0m11.197s 00:13:28.271 user 0m28.124s 00:13:28.271 sys 0m19.078s 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 
00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:28.271 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "be0e98d1-042e-42ff-a24b-59a38e52fb08"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be0e98d1-042e-42ff-a24b-59a38e52fb08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "85009075-96f9-415b-b424-ad25d3eb68e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "85009075-96f9-415b-b424-ad25d3eb68e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "99f79037-384a-4b03-b65f-9221ae84c8c5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99f79037-384a-4b03-b65f-9221ae84c8c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' 
},' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "006653db-cfa6-44a4-88df-8bc3258536aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "006653db-cfa6-44a4-88df-8bc3258536aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "5cb5b510-b7b0-4127-b1ed-43e8ae2ce992"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5cb5b510-b7b0-4127-b1ed-43e8ae2ce992",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f403ace5-2fec-46d1-9d6a-15387d3490e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' 
"num_blocks": 262144,' ' "uuid": "f403ace5-2fec-46d1-9d6a-15387d3490e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.272 /home/vagrant/spdk_repo/spdk 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:28.272 00:13:28.272 real 0m11.367s 00:13:28.272 user 0m28.211s 00:13:28.272 sys 0m19.147s 00:13:28.272 ************************************ 00:13:28.272 END TEST bdev_fio 00:13:28.272 ************************************ 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.272 08:57:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:28.272 08:57:21 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:28.272 08:57:21 blockdev_xnvme -- bdev/blockdev.sh@776 -- # 
run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.272 08:57:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:28.272 08:57:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.272 08:57:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.272 ************************************ 00:13:28.272 START TEST bdev_verify 00:13:28.272 ************************************ 00:13:28.272 08:57:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:28.272 [2024-11-28 08:57:21.390680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:28.272 [2024-11-28 08:57:21.391033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82356 ] 00:13:28.272 [2024-11-28 08:57:21.543492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:28.272 [2024-11-28 08:57:21.617626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.272 [2024-11-28 08:57:21.617730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.272 Running I/O for 5 seconds... 
00:13:30.158 22830.00 IOPS, 89.18 MiB/s [2024-11-28T08:57:25.232Z] 23698.00 IOPS, 92.57 MiB/s [2024-11-28T08:57:26.175Z] 23162.33 IOPS, 90.48 MiB/s [2024-11-28T08:57:27.209Z] 23105.00 IOPS, 90.25 MiB/s [2024-11-28T08:57:27.209Z] 23164.60 IOPS, 90.49 MiB/s 00:13:33.089 Latency(us) 00:13:33.089 [2024-11-28T08:57:27.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.089 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0xa0000 00:13:33.089 nvme0n1 : 5.05 1876.74 7.33 0.00 0.00 68081.30 6755.25 65334.35 00:13:33.089 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0xa0000 length 0xa0000 00:13:33.089 nvme0n1 : 5.05 1699.09 6.64 0.00 0.00 75177.95 8721.33 78239.90 00:13:33.089 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0xbd0bd 00:13:33.089 nvme1n1 : 5.05 2554.03 9.98 0.00 0.00 49823.53 5973.86 72190.42 00:13:33.089 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:33.089 nvme1n1 : 5.06 2428.28 9.49 0.00 0.00 52400.09 6200.71 57268.38 00:13:33.089 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0x80000 00:13:33.089 nvme2n1 : 5.04 1931.40 7.54 0.00 0.00 65934.57 7360.20 59688.17 00:13:33.089 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x80000 length 0x80000 00:13:33.089 nvme2n1 : 5.04 1725.62 6.74 0.00 0.00 73587.39 10788.23 66140.95 00:13:33.089 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0x80000 00:13:33.089 nvme2n2 : 5.04 1854.61 7.24 0.00 0.00 68584.03 14216.27 58074.98 
00:13:33.089 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x80000 length 0x80000 00:13:33.089 nvme2n2 : 5.06 1696.53 6.63 0.00 0.00 74697.83 16434.41 62511.26 00:13:33.089 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0x80000 00:13:33.089 nvme2n3 : 5.06 1873.43 7.32 0.00 0.00 67810.78 9981.64 60494.77 00:13:33.089 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x80000 length 0x80000 00:13:33.089 nvme2n3 : 5.07 1716.32 6.70 0.00 0.00 73779.11 3932.16 63317.86 00:13:33.089 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x0 length 0x20000 00:13:33.089 nvme3n1 : 5.06 1871.36 7.31 0.00 0.00 67774.44 3352.42 61301.37 00:13:33.089 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.089 Verification LBA range: start 0x20000 length 0x20000 00:13:33.089 nvme3n1 : 5.06 1693.82 6.62 0.00 0.00 74584.89 5721.80 78239.90 00:13:33.089 [2024-11-28T08:57:27.209Z] =================================================================================================================== 00:13:33.089 [2024-11-28T08:57:27.209Z] Total : 22921.24 89.54 0.00 0.00 66534.58 3352.42 78239.90 00:13:33.351 00:13:33.351 real 0m5.987s 00:13:33.351 user 0m9.363s 00:13:33.351 sys 0m1.584s 00:13:33.351 08:57:27 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.351 08:57:27 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:33.351 ************************************ 00:13:33.351 END TEST bdev_verify 00:13:33.351 ************************************ 00:13:33.351 08:57:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.351 08:57:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:33.351 08:57:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.351 08:57:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.351 ************************************ 00:13:33.351 START TEST bdev_verify_big_io 00:13:33.351 ************************************ 00:13:33.351 08:57:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.351 [2024-11-28 08:57:27.447076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:33.351 [2024-11-28 08:57:27.447413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82444 ] 00:13:33.612 [2024-11-28 08:57:27.604451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:33.612 [2024-11-28 08:57:27.675207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.612 [2024-11-28 08:57:27.675326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.874 Running I/O for 5 seconds... 
00:13:40.003 1048.00 IOPS, 65.50 MiB/s [2024-11-28T08:57:34.693Z] 2949.50 IOPS, 184.34 MiB/s 00:13:40.573 Latency(us) 00:13:40.573 [2024-11-28T08:57:34.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.573 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0xa000 00:13:40.573 nvme0n1 : 5.82 98.94 6.18 0.00 0.00 1275098.80 18148.43 2039077.02 00:13:40.573 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0xa000 length 0xa000 00:13:40.573 nvme0n1 : 5.86 111.91 6.99 0.00 0.00 1076294.47 49807.36 1703532.70 00:13:40.573 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0xbd0b 00:13:40.573 nvme1n1 : 5.83 197.63 12.35 0.00 0.00 617478.83 11594.83 935652.43 00:13:40.573 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:40.573 nvme1n1 : 5.87 106.37 6.65 0.00 0.00 1068757.48 25407.80 1961643.72 00:13:40.573 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0x8000 00:13:40.573 nvme2n1 : 5.82 101.66 6.35 0.00 0.00 1157097.18 34683.67 2439149.10 00:13:40.573 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x8000 length 0x8000 00:13:40.573 nvme2n1 : 6.02 94.28 5.89 0.00 0.00 1125805.53 56461.78 1206669.00 00:13:40.573 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0x8000 00:13:40.573 nvme2n2 : 5.83 159.31 9.96 0.00 0.00 730230.72 6856.07 800144.15 00:13:40.573 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x8000 length 0x8000 
00:13:40.573 nvme2n2 : 6.19 131.78 8.24 0.00 0.00 780438.43 10132.87 890483.00 00:13:40.573 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0x8000 00:13:40.573 nvme2n3 : 5.83 131.71 8.23 0.00 0.00 859882.60 10637.00 1587382.74 00:13:40.573 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x8000 length 0x8000 00:13:40.573 nvme2n3 : 6.36 213.73 13.36 0.00 0.00 462629.10 5545.35 1568024.42 00:13:40.573 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x0 length 0x2000 00:13:40.573 nvme3n1 : 5.83 129.56 8.10 0.00 0.00 849727.89 10485.76 1703532.70 00:13:40.573 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:40.573 Verification LBA range: start 0x2000 length 0x2000 00:13:40.573 nvme3n1 : 6.59 207.02 12.94 0.00 0.00 461124.95 1077.56 4026531.84 00:13:40.573 [2024-11-28T08:57:34.693Z] =================================================================================================================== 00:13:40.573 [2024-11-28T08:57:34.693Z] Total : 1683.90 105.24 0.00 0.00 789229.42 1077.56 4026531.84 00:13:40.834 00:13:40.834 real 0m7.564s 00:13:40.834 user 0m13.788s 00:13:40.834 sys 0m0.534s 00:13:40.834 08:57:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.834 ************************************ 00:13:40.834 END TEST bdev_verify_big_io 00:13:40.834 ************************************ 00:13:40.834 08:57:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.096 08:57:34 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:41.096 08:57:34 blockdev_xnvme -- 
common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:41.096 08:57:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.096 08:57:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.096 ************************************ 00:13:41.096 START TEST bdev_write_zeroes 00:13:41.096 ************************************ 00:13:41.096 08:57:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:41.096 [2024-11-28 08:57:35.085098] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:41.096 [2024-11-28 08:57:35.085248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82548 ] 00:13:41.355 [2024-11-28 08:57:35.238098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.355 [2024-11-28 08:57:35.310766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.615 Running I/O for 1 seconds... 
00:13:42.560 85056.00 IOPS, 332.25 MiB/s 00:13:42.560 Latency(us) 00:13:42.560 [2024-11-28T08:57:36.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.560 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme0n1 : 1.02 13608.50 53.16 0.00 0.00 9394.27 5469.74 19761.62 00:13:42.560 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme1n1 : 1.02 16435.91 64.20 0.00 0.00 7761.04 4032.98 16736.89 00:13:42.560 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme2n1 : 1.02 13588.78 53.08 0.00 0.00 9358.97 5494.94 19156.68 00:13:42.560 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme2n2 : 1.02 13572.66 53.02 0.00 0.00 9337.23 5293.29 19055.85 00:13:42.560 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme2n3 : 1.02 13556.62 52.96 0.00 0.00 9338.82 5268.09 18955.03 00:13:42.560 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:42.560 nvme3n1 : 1.02 13540.69 52.89 0.00 0.00 9343.82 5343.70 18753.38 00:13:42.560 [2024-11-28T08:57:36.680Z] =================================================================================================================== 00:13:42.560 [2024-11-28T08:57:36.680Z] Total : 84303.16 329.31 0.00 0.00 9043.03 4032.98 19761.62 00:13:42.821 00:13:42.821 real 0m1.896s 00:13:42.821 user 0m1.142s 00:13:42.821 sys 0m0.572s 00:13:42.821 ************************************ 00:13:42.821 END TEST bdev_write_zeroes 00:13:42.821 ************************************ 00:13:42.821 08:57:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.821 08:57:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:43.082 08:57:36 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.082 08:57:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:43.082 08:57:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.082 08:57:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.082 ************************************ 00:13:43.082 START TEST bdev_json_nonenclosed 00:13:43.082 ************************************ 00:13:43.082 08:57:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.082 [2024-11-28 08:57:37.052580] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:43.082 [2024-11-28 08:57:37.053141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82590 ] 00:13:43.343 [2024-11-28 08:57:37.206141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.343 [2024-11-28 08:57:37.277792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.343 [2024-11-28 08:57:37.277959] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:13:43.343 [2024-11-28 08:57:37.277980] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:43.343 [2024-11-28 08:57:37.277996] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.343 ************************************ 00:13:43.343 END TEST bdev_json_nonenclosed 00:13:43.343 ************************************ 00:13:43.343 00:13:43.343 real 0m0.428s 00:13:43.343 user 0m0.195s 00:13:43.343 sys 0m0.126s 00:13:43.343 08:57:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.343 08:57:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:43.604 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.604 08:57:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:43.604 08:57:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.604 08:57:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.604 ************************************ 00:13:43.604 START TEST bdev_json_nonarray 00:13:43.604 ************************************ 00:13:43.604 08:57:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.604 [2024-11-28 08:57:37.557527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:43.604 [2024-11-28 08:57:37.557949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82616 ] 00:13:43.604 [2024-11-28 08:57:37.708529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.866 [2024-11-28 08:57:37.781813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.866 [2024-11-28 08:57:37.781962] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:43.866 [2024-11-28 08:57:37.781981] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:43.866 [2024-11-28 08:57:37.781996] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.866 00:13:43.866 real 0m0.435s 00:13:43.866 user 0m0.190s 00:13:43.866 sys 0m0.138s 00:13:43.866 08:57:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.866 08:57:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:43.866 ************************************ 00:13:43.866 END TEST bdev_json_nonarray 00:13:43.866 ************************************ 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:43.866 08:57:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:44.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.004 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.004 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.004 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:48.004 00:13:48.004 real 0m52.104s 00:13:48.004 user 1m18.802s 00:13:48.004 sys 0m35.939s 00:13:48.004 ************************************ 00:13:48.004 END TEST blockdev_xnvme 00:13:48.004 ************************************ 00:13:48.004 08:57:42 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.005 08:57:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.005 08:57:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:48.005 08:57:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:48.005 08:57:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.005 08:57:42 -- common/autotest_common.sh@10 -- # set +x 00:13:48.264 ************************************ 00:13:48.264 START TEST ublk 00:13:48.264 ************************************ 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:48.264 * Looking for test storage... 
00:13:48.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.264 08:57:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.264 08:57:42 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.264 08:57:42 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.264 08:57:42 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.264 08:57:42 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.264 08:57:42 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:48.264 08:57:42 ublk -- scripts/common.sh@345 -- # : 1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.264 08:57:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.264 08:57:42 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@353 -- # local d=1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.264 08:57:42 ublk -- scripts/common.sh@355 -- # echo 1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.264 08:57:42 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@353 -- # local d=2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.264 08:57:42 ublk -- scripts/common.sh@355 -- # echo 2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.264 08:57:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.264 08:57:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.264 08:57:42 ublk -- scripts/common.sh@368 -- # return 0 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:48.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.264 --rc genhtml_branch_coverage=1 00:13:48.264 --rc genhtml_function_coverage=1 00:13:48.264 --rc genhtml_legend=1 00:13:48.264 --rc geninfo_all_blocks=1 00:13:48.264 --rc geninfo_unexecuted_blocks=1 00:13:48.264 00:13:48.264 ' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:48.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.264 --rc genhtml_branch_coverage=1 00:13:48.264 --rc genhtml_function_coverage=1 00:13:48.264 --rc genhtml_legend=1 00:13:48.264 --rc geninfo_all_blocks=1 00:13:48.264 --rc geninfo_unexecuted_blocks=1 00:13:48.264 00:13:48.264 ' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:48.264 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:48.264 --rc genhtml_branch_coverage=1 00:13:48.264 --rc genhtml_function_coverage=1 00:13:48.264 --rc genhtml_legend=1 00:13:48.264 --rc geninfo_all_blocks=1 00:13:48.264 --rc geninfo_unexecuted_blocks=1 00:13:48.264 00:13:48.264 ' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:48.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.264 --rc genhtml_branch_coverage=1 00:13:48.264 --rc genhtml_function_coverage=1 00:13:48.264 --rc genhtml_legend=1 00:13:48.264 --rc geninfo_all_blocks=1 00:13:48.264 --rc geninfo_unexecuted_blocks=1 00:13:48.264 00:13:48.264 ' 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:48.264 08:57:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:48.264 08:57:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:48.264 08:57:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:48.264 08:57:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:48.264 08:57:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:48.264 08:57:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:48.264 08:57:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:48.264 08:57:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@133 -- # 
modprobe ublk_drv 00:13:48.264 08:57:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.264 08:57:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:48.264 ************************************ 00:13:48.264 START TEST test_save_ublk_config 00:13:48.264 ************************************ 00:13:48.264 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:48.264 08:57:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:48.264 08:57:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82901 00:13:48.264 08:57:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82901 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82901 ']' 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.265 08:57:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.523 [2024-11-28 08:57:42.411543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:48.523 [2024-11-28 08:57:42.411708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82901 ] 00:13:48.523 [2024-11-28 08:57:42.566463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.785 [2024-11-28 08:57:42.651779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.358 [2024-11-28 08:57:43.270825] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:49.358 [2024-11-28 08:57:43.271253] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.358 malloc0 00:13:49.358 [2024-11-28 08:57:43.310967] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:49.358 [2024-11-28 08:57:43.311092] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:49.358 [2024-11-28 08:57:43.311103] ublk.c: 971:ublk_dev_list_register: *DEBUG*: 
ublk0: add to tailq 00:13:49.358 [2024-11-28 08:57:43.311118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.358 [2024-11-28 08:57:43.319981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.358 [2024-11-28 08:57:43.320023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.358 [2024-11-28 08:57:43.326836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.358 [2024-11-28 08:57:43.326972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:49.358 [2024-11-28 08:57:43.343835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.358 0 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.358 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.620 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.620 08:57:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:49.620 "subsystems": [ 00:13:49.620 { 00:13:49.620 "subsystem": "fsdev", 00:13:49.620 "config": [ 00:13:49.620 { 00:13:49.620 "method": "fsdev_set_opts", 00:13:49.620 "params": { 00:13:49.620 "fsdev_io_pool_size": 65535, 00:13:49.620 "fsdev_io_cache_size": 256 00:13:49.620 } 00:13:49.620 } 00:13:49.620 ] 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "subsystem": "keyring", 00:13:49.620 "config": [] 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "subsystem": "iobuf", 00:13:49.620 "config": [ 00:13:49.620 { 00:13:49.620 "method": "iobuf_set_options", 00:13:49.620 "params": { 00:13:49.620 "small_pool_count": 8192, 00:13:49.620 
"large_pool_count": 1024, 00:13:49.620 "small_bufsize": 8192, 00:13:49.620 "large_bufsize": 135168 00:13:49.620 } 00:13:49.620 } 00:13:49.620 ] 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "subsystem": "sock", 00:13:49.620 "config": [ 00:13:49.620 { 00:13:49.620 "method": "sock_set_default_impl", 00:13:49.620 "params": { 00:13:49.620 "impl_name": "posix" 00:13:49.620 } 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "method": "sock_impl_set_options", 00:13:49.620 "params": { 00:13:49.620 "impl_name": "ssl", 00:13:49.620 "recv_buf_size": 4096, 00:13:49.620 "send_buf_size": 4096, 00:13:49.620 "enable_recv_pipe": true, 00:13:49.620 "enable_quickack": false, 00:13:49.620 "enable_placement_id": 0, 00:13:49.620 "enable_zerocopy_send_server": true, 00:13:49.620 "enable_zerocopy_send_client": false, 00:13:49.620 "zerocopy_threshold": 0, 00:13:49.620 "tls_version": 0, 00:13:49.620 "enable_ktls": false 00:13:49.620 } 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "method": "sock_impl_set_options", 00:13:49.620 "params": { 00:13:49.620 "impl_name": "posix", 00:13:49.620 "recv_buf_size": 2097152, 00:13:49.620 "send_buf_size": 2097152, 00:13:49.620 "enable_recv_pipe": true, 00:13:49.620 "enable_quickack": false, 00:13:49.620 "enable_placement_id": 0, 00:13:49.620 "enable_zerocopy_send_server": true, 00:13:49.620 "enable_zerocopy_send_client": false, 00:13:49.620 "zerocopy_threshold": 0, 00:13:49.620 "tls_version": 0, 00:13:49.620 "enable_ktls": false 00:13:49.620 } 00:13:49.620 } 00:13:49.620 ] 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "subsystem": "vmd", 00:13:49.620 "config": [] 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "subsystem": "accel", 00:13:49.620 "config": [ 00:13:49.620 { 00:13:49.620 "method": "accel_set_options", 00:13:49.620 "params": { 00:13:49.620 "small_cache_size": 128, 00:13:49.620 "large_cache_size": 16, 00:13:49.620 "task_count": 2048, 00:13:49.620 "sequence_count": 2048, 00:13:49.620 "buf_count": 2048 00:13:49.620 } 00:13:49.620 } 00:13:49.620 ] 00:13:49.620 
}, 00:13:49.620 { 00:13:49.620 "subsystem": "bdev", 00:13:49.620 "config": [ 00:13:49.620 { 00:13:49.620 "method": "bdev_set_options", 00:13:49.620 "params": { 00:13:49.620 "bdev_io_pool_size": 65535, 00:13:49.620 "bdev_io_cache_size": 256, 00:13:49.620 "bdev_auto_examine": true, 00:13:49.620 "iobuf_small_cache_size": 128, 00:13:49.620 "iobuf_large_cache_size": 16 00:13:49.620 } 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "method": "bdev_raid_set_options", 00:13:49.620 "params": { 00:13:49.620 "process_window_size_kb": 1024, 00:13:49.620 "process_max_bandwidth_mb_sec": 0 00:13:49.620 } 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "method": "bdev_iscsi_set_options", 00:13:49.620 "params": { 00:13:49.620 "timeout_sec": 30 00:13:49.620 } 00:13:49.620 }, 00:13:49.620 { 00:13:49.620 "method": "bdev_nvme_set_options", 00:13:49.620 "params": { 00:13:49.620 "action_on_timeout": "none", 00:13:49.620 "timeout_us": 0, 00:13:49.620 "timeout_admin_us": 0, 00:13:49.620 "keep_alive_timeout_ms": 10000, 00:13:49.620 "arbitration_burst": 0, 00:13:49.620 "low_priority_weight": 0, 00:13:49.620 "medium_priority_weight": 0, 00:13:49.620 "high_priority_weight": 0, 00:13:49.620 "nvme_adminq_poll_period_us": 10000, 00:13:49.620 "nvme_ioq_poll_period_us": 0, 00:13:49.620 "io_queue_requests": 0, 00:13:49.620 "delay_cmd_submit": true, 00:13:49.620 "transport_retry_count": 4, 00:13:49.620 "bdev_retry_count": 3, 00:13:49.620 "transport_ack_timeout": 0, 00:13:49.620 "ctrlr_loss_timeout_sec": 0, 00:13:49.620 "reconnect_delay_sec": 0, 00:13:49.620 "fast_io_fail_timeout_sec": 0, 00:13:49.620 "disable_auto_failback": false, 00:13:49.620 "generate_uuids": false, 00:13:49.620 "transport_tos": 0, 00:13:49.620 "nvme_error_stat": false, 00:13:49.620 "rdma_srq_size": 0, 00:13:49.620 "io_path_stat": false, 00:13:49.620 "allow_accel_sequence": false, 00:13:49.621 "rdma_max_cq_size": 0, 00:13:49.621 "rdma_cm_event_timeout_ms": 0, 00:13:49.621 "dhchap_digests": [ 00:13:49.621 "sha256", 00:13:49.621 "sha384", 
00:13:49.621 "sha512" 00:13:49.621 ], 00:13:49.621 "dhchap_dhgroups": [ 00:13:49.621 "null", 00:13:49.621 "ffdhe2048", 00:13:49.621 "ffdhe3072", 00:13:49.621 "ffdhe4096", 00:13:49.621 "ffdhe6144", 00:13:49.621 "ffdhe8192" 00:13:49.621 ] 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "bdev_nvme_set_hotplug", 00:13:49.621 "params": { 00:13:49.621 "period_us": 100000, 00:13:49.621 "enable": false 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "bdev_malloc_create", 00:13:49.621 "params": { 00:13:49.621 "name": "malloc0", 00:13:49.621 "num_blocks": 8192, 00:13:49.621 "block_size": 4096, 00:13:49.621 "physical_block_size": 4096, 00:13:49.621 "uuid": "5cd7c14d-9f91-49b4-8152-57f16cd1a5cc", 00:13:49.621 "optimal_io_boundary": 0, 00:13:49.621 "md_size": 0, 00:13:49.621 "dif_type": 0, 00:13:49.621 "dif_is_head_of_md": false, 00:13:49.621 "dif_pi_format": 0 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "bdev_wait_for_examine" 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "scsi", 00:13:49.621 "config": null 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "scheduler", 00:13:49.621 "config": [ 00:13:49.621 { 00:13:49.621 "method": "framework_set_scheduler", 00:13:49.621 "params": { 00:13:49.621 "name": "static" 00:13:49.621 } 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "vhost_scsi", 00:13:49.621 "config": [] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "vhost_blk", 00:13:49.621 "config": [] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "ublk", 00:13:49.621 "config": [ 00:13:49.621 { 00:13:49.621 "method": "ublk_create_target", 00:13:49.621 "params": { 00:13:49.621 "cpumask": "1" 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "ublk_start_disk", 00:13:49.621 "params": { 00:13:49.621 "bdev_name": "malloc0", 00:13:49.621 "ublk_id": 0, 00:13:49.621 "num_queues": 1, 00:13:49.621 
"queue_depth": 128 00:13:49.621 } 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "nbd", 00:13:49.621 "config": [] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "nvmf", 00:13:49.621 "config": [ 00:13:49.621 { 00:13:49.621 "method": "nvmf_set_config", 00:13:49.621 "params": { 00:13:49.621 "discovery_filter": "match_any", 00:13:49.621 "admin_cmd_passthru": { 00:13:49.621 "identify_ctrlr": false 00:13:49.621 }, 00:13:49.621 "dhchap_digests": [ 00:13:49.621 "sha256", 00:13:49.621 "sha384", 00:13:49.621 "sha512" 00:13:49.621 ], 00:13:49.621 "dhchap_dhgroups": [ 00:13:49.621 "null", 00:13:49.621 "ffdhe2048", 00:13:49.621 "ffdhe3072", 00:13:49.621 "ffdhe4096", 00:13:49.621 "ffdhe6144", 00:13:49.621 "ffdhe8192" 00:13:49.621 ] 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "nvmf_set_max_subsystems", 00:13:49.621 "params": { 00:13:49.621 "max_subsystems": 1024 00:13:49.621 } 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "method": "nvmf_set_crdt", 00:13:49.621 "params": { 00:13:49.621 "crdt1": 0, 00:13:49.621 "crdt2": 0, 00:13:49.621 "crdt3": 0 00:13:49.621 } 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }, 00:13:49.621 { 00:13:49.621 "subsystem": "iscsi", 00:13:49.621 "config": [ 00:13:49.621 { 00:13:49.621 "method": "iscsi_set_options", 00:13:49.621 "params": { 00:13:49.621 "node_base": "iqn.2016-06.io.spdk", 00:13:49.621 "max_sessions": 128, 00:13:49.621 "max_connections_per_session": 2, 00:13:49.621 "max_queue_depth": 64, 00:13:49.621 "default_time2wait": 2, 00:13:49.621 "default_time2retain": 20, 00:13:49.621 "first_burst_length": 8192, 00:13:49.621 "immediate_data": true, 00:13:49.621 "allow_duplicated_isid": false, 00:13:49.621 "error_recovery_level": 0, 00:13:49.621 "nop_timeout": 60, 00:13:49.621 "nop_in_interval": 30, 00:13:49.621 "disable_chap": false, 00:13:49.621 "require_chap": false, 00:13:49.621 "mutual_chap": false, 00:13:49.621 "chap_group": 0, 00:13:49.621 "max_large_datain_per_connection": 
64, 00:13:49.621 "max_r2t_per_connection": 4, 00:13:49.621 "pdu_pool_size": 36864, 00:13:49.621 "immediate_data_pool_size": 16384, 00:13:49.621 "data_out_pool_size": 2048 00:13:49.621 } 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 } 00:13:49.621 ] 00:13:49.621 }' 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82901 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82901 ']' 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82901 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82901 00:13:49.621 killing process with pid 82901 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82901' 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82901 00:13:49.621 08:57:43 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82901 00:13:50.195 [2024-11-28 08:57:44.074739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.195 [2024-11-28 08:57:44.118972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.195 [2024-11-28 08:57:44.119128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.195 [2024-11-28 08:57:44.126843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.195 [2024-11-28 
08:57:44.126919] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:50.195 [2024-11-28 08:57:44.126928] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:50.195 [2024-11-28 08:57:44.126965] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.195 [2024-11-28 08:57:44.127126] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82945 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82945 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82945 ']' 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:50.768 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.768 08:57:44 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:50.768 "subsystems": [ 00:13:50.768 { 00:13:50.768 "subsystem": "fsdev", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "fsdev_set_opts", 00:13:50.768 "params": { 00:13:50.768 "fsdev_io_pool_size": 65535, 00:13:50.768 "fsdev_io_cache_size": 256 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "keyring", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "iobuf", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "iobuf_set_options", 00:13:50.768 "params": { 00:13:50.768 "small_pool_count": 8192, 00:13:50.768 "large_pool_count": 1024, 00:13:50.768 "small_bufsize": 8192, 00:13:50.768 "large_bufsize": 135168 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "sock", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "sock_set_default_impl", 00:13:50.768 "params": { 00:13:50.768 "impl_name": "posix" 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "sock_impl_set_options", 00:13:50.768 "params": { 00:13:50.768 "impl_name": "ssl", 00:13:50.768 "recv_buf_size": 4096, 00:13:50.768 "send_buf_size": 4096, 00:13:50.768 "enable_recv_pipe": true, 00:13:50.768 "enable_quickack": false, 00:13:50.768 "enable_placement_id": 0, 00:13:50.768 "enable_zerocopy_send_server": true, 00:13:50.768 "enable_zerocopy_send_client": false, 00:13:50.768 "zerocopy_threshold": 0, 00:13:50.768 "tls_version": 0, 00:13:50.768 "enable_ktls": false 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "sock_impl_set_options", 00:13:50.768 "params": { 00:13:50.768 "impl_name": "posix", 00:13:50.768 "recv_buf_size": 2097152, 00:13:50.768 "send_buf_size": 2097152, 00:13:50.768 "enable_recv_pipe": true, 
00:13:50.768 "enable_quickack": false, 00:13:50.768 "enable_placement_id": 0, 00:13:50.768 "enable_zerocopy_send_server": true, 00:13:50.768 "enable_zerocopy_send_client": false, 00:13:50.768 "zerocopy_threshold": 0, 00:13:50.768 "tls_version": 0, 00:13:50.768 "enable_ktls": false 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "vmd", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "accel", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "accel_set_options", 00:13:50.768 "params": { 00:13:50.768 "small_cache_size": 128, 00:13:50.768 "large_cache_size": 16, 00:13:50.768 "task_count": 2048, 00:13:50.768 "sequence_count": 2048, 00:13:50.768 "buf_count": 2048 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "bdev", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "bdev_set_options", 00:13:50.768 "params": { 00:13:50.768 "bdev_io_pool_size": 65535, 00:13:50.768 "bdev_io_cache_size": 256, 00:13:50.768 "bdev_auto_examine": true, 00:13:50.768 "iobuf_small_cache_size": 128, 00:13:50.768 "iobuf_large_cache_size": 16 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_raid_set_options", 00:13:50.768 "params": { 00:13:50.768 "process_window_size_kb": 1024, 00:13:50.768 "process_max_bandwidth_mb_sec": 0 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_iscsi_set_options", 00:13:50.768 "params": { 00:13:50.768 "timeout_sec": 30 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_nvme_set_options", 00:13:50.768 "params": { 00:13:50.768 "action_on_timeout": "none", 00:13:50.768 "timeout_us": 0, 00:13:50.768 "timeout_admin_us": 0, 00:13:50.768 "keep_alive_timeout_ms": 10000, 00:13:50.768 "arbitration_burst": 0, 00:13:50.768 "low_priority_weight": 0, 00:13:50.768 "medium_priority_weight": 0, 00:13:50.768 "high_priority_weight": 0, 00:13:50.768 
"nvme_adminq_poll_period_us": 10000, 00:13:50.768 "nvme_ioq_poll_period_us": 0, 00:13:50.768 "io_queue_requests": 0, 00:13:50.768 "delay_cmd_submit": true, 00:13:50.768 "transport_retry_count": 4, 00:13:50.768 "bdev_retry_count": 3, 00:13:50.768 "transport_ack_timeout": 0, 00:13:50.768 "ctrlr_loss_timeout_sec": 0, 00:13:50.768 "reconnect_delay_sec": 0, 00:13:50.768 "fast_io_fail_timeout_sec": 0, 00:13:50.768 "disable_auto_failback": false, 00:13:50.768 "generate_uuids": false, 00:13:50.768 "transport_tos": 0, 00:13:50.768 "nvme_error_stat": false, 00:13:50.768 "rdma_srq_size": 0, 00:13:50.768 "io_path_stat": false, 00:13:50.768 "allow_accel_sequence": false, 00:13:50.768 "rdma_max_cq_size": 0, 00:13:50.768 "rdma_cm_event_timeout_ms": 0, 00:13:50.768 "dhchap_digests": [ 00:13:50.768 "sha256", 00:13:50.768 "sha384", 00:13:50.768 "sha512" 00:13:50.768 ], 00:13:50.768 "dhchap_dhgroups": [ 00:13:50.768 "null", 00:13:50.768 "ffdhe2048", 00:13:50.768 "ffdhe3072", 00:13:50.768 "ffdhe4096", 00:13:50.768 "ffdhe6144", 00:13:50.768 "ffdhe8192" 00:13:50.768 ] 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_nvme_set_hotplug", 00:13:50.768 "params": { 00:13:50.768 "period_us": 100000, 00:13:50.768 "enable": false 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_malloc_create", 00:13:50.768 "params": { 00:13:50.768 "name": "malloc0", 00:13:50.768 "num_blocks": 8192, 00:13:50.768 "block_size": 4096, 00:13:50.768 "physical_block_size": 4096, 00:13:50.768 "uuid": "5cd7c14d-9f91-49b4-8152-57f16cd1a5cc", 00:13:50.768 "optimal_io_boundary": 0, 00:13:50.768 "md_size": 0, 00:13:50.768 "dif_type": 0, 00:13:50.768 "dif_is_head_of_md": false, 00:13:50.768 "dif_pi_format": 0 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "bdev_wait_for_examine" 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "scsi", 00:13:50.768 "config": null 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 
"subsystem": "scheduler", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "framework_set_scheduler", 00:13:50.768 "params": { 00:13:50.768 "name": "static" 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "vhost_scsi", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "vhost_blk", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "ublk", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "ublk_create_target", 00:13:50.768 "params": { 00:13:50.768 "cpumask": "1" 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "ublk_start_disk", 00:13:50.768 "params": { 00:13:50.768 "bdev_name": "malloc0", 00:13:50.768 "ublk_id": 0, 00:13:50.768 "num_queues": 1, 00:13:50.768 "queue_depth": 128 00:13:50.768 } 00:13:50.768 } 00:13:50.768 ] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "nbd", 00:13:50.768 "config": [] 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "subsystem": "nvmf", 00:13:50.768 "config": [ 00:13:50.768 { 00:13:50.768 "method": "nvmf_set_config", 00:13:50.768 "params": { 00:13:50.768 "discovery_filter": "match_any", 00:13:50.768 "admin_cmd_passthru": { 00:13:50.768 "identify_ctrlr": false 00:13:50.768 }, 00:13:50.768 "dhchap_digests": [ 00:13:50.768 "sha256", 00:13:50.768 "sha384", 00:13:50.768 "sha512" 00:13:50.768 ], 00:13:50.768 "dhchap_dhgroups": [ 00:13:50.768 "null", 00:13:50.768 "ffdhe2048", 00:13:50.768 "ffdhe3072", 00:13:50.768 "ffdhe4096", 00:13:50.768 "ffdhe6144", 00:13:50.768 "ffdhe8192" 00:13:50.768 ] 00:13:50.768 } 00:13:50.768 }, 00:13:50.768 { 00:13:50.768 "method": "nvmf_set_max_subsystems", 00:13:50.768 "params": { 00:13:50.769 "max_subsystems": 1024 00:13:50.769 } 00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "method": "nvmf_set_crdt", 00:13:50.769 "params": { 00:13:50.769 "crdt1": 0, 00:13:50.769 "crdt2": 0, 00:13:50.769 "crdt3": 0 00:13:50.769 } 00:13:50.769 } 00:13:50.769 ] 
00:13:50.769 }, 00:13:50.769 { 00:13:50.769 "subsystem": "iscsi", 00:13:50.769 "config": [ 00:13:50.769 { 00:13:50.769 "method": "iscsi_set_options", 00:13:50.769 "params": { 00:13:50.769 "node_base": "iqn.2016-06.io.spdk", 00:13:50.769 "max_sessions": 128, 00:13:50.769 "max_connections_per_session": 2, 00:13:50.769 "max_queue_depth": 64, 00:13:50.769 "default_time2wait": 2, 00:13:50.769 "default_time2retain": 20, 00:13:50.769 "first_burst_length": 8192, 00:13:50.769 "immediate_data": true, 00:13:50.769 "allow_duplicated_isid": false, 00:13:50.769 "error_recovery_level": 0, 00:13:50.769 "nop_timeout": 60, 00:13:50.769 "nop_in_interval": 30, 00:13:50.769 "disable_chap": false, 00:13:50.769 "require_chap": false, 00:13:50.769 "mutual_chap": false, 00:13:50.769 "chap_group": 0, 00:13:50.769 "max_large_datain_per_connection": 64, 00:13:50.769 "max_r2t_per_connection": 4, 00:13:50.769 "pdu_pool_size": 36864, 00:13:50.769 "immediate_data_pool_size": 16384, 00:13:50.769 "data_out_pool_size": 2048 00:13:50.769 } 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 } 00:13:50.769 ] 00:13:50.769 }' 00:13:50.769 08:57:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.030 [2024-11-28 08:57:44.900980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:51.030 [2024-11-28 08:57:44.901118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82945 ] 00:13:51.030 [2024-11-28 08:57:45.053890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.030 [2024-11-28 08:57:45.138816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.602 [2024-11-28 08:57:45.610828] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:51.602 [2024-11-28 08:57:45.611260] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.602 [2024-11-28 08:57:45.618997] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:51.602 [2024-11-28 08:57:45.619107] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:51.602 [2024-11-28 08:57:45.619116] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:51.602 [2024-11-28 08:57:45.619126] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:51.602 [2024-11-28 08:57:45.627978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:51.602 [2024-11-28 08:57:45.628014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:51.602 [2024-11-28 08:57:45.634846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:51.602 [2024-11-28 08:57:45.634981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:51.602 [2024-11-28 08:57:45.651825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.863 08:57:45 
ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.863 08:57:45 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82945 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82945 ']' 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82945 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82945 00:13:51.864 killing process with pid 82945 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82945' 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82945 00:13:51.864 08:57:45 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82945 00:13:52.125 
[2024-11-28 08:57:46.242861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.386 [2024-11-28 08:57:46.286974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.386 [2024-11-28 08:57:46.287128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.386 [2024-11-28 08:57:46.297848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.386 [2024-11-28 08:57:46.297922] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.386 [2024-11-28 08:57:46.297933] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.386 [2024-11-28 08:57:46.297974] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.386 [2024-11-28 08:57:46.298143] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:52.956 08:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:52.956 ************************************ 00:13:52.956 END TEST test_save_ublk_config 00:13:52.956 ************************************ 00:13:52.956 00:13:52.956 real 0m4.657s 00:13:52.956 user 0m2.870s 00:13:52.956 sys 0m2.471s 00:13:52.956 08:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.956 08:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.957 08:57:47 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83001 00:13:52.957 08:57:47 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.957 08:57:47 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83001 00:13:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@831 -- # '[' -z 83001 ']' 00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.957 08:57:47 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:52.957 08:57:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:53.215 [2024-11-28 08:57:47.084968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:53.215 [2024-11-28 08:57:47.085064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83001 ] 00:13:53.215 [2024-11-28 08:57:47.227840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.215 [2024-11-28 08:57:47.270736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.215 [2024-11-28 08:57:47.270793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.150 08:57:47 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.150 08:57:47 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:54.150 08:57:47 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:54.150 08:57:47 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:54.150 08:57:47 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.150 08:57:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 ************************************ 00:13:54.151 START TEST 
test_create_ublk 00:13:54.151 ************************************ 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:54.151 08:57:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 [2024-11-28 08:57:47.938819] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:54.151 [2024-11-28 08:57:47.940054] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.151 08:57:47 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:54.151 08:57:47 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.151 08:57:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 [2024-11-28 08:57:48.014924] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:54.151 [2024-11-28 08:57:48.015256] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:54.151 [2024-11-28 08:57:48.015265] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:54.151 
[2024-11-28 08:57:48.015272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:54.151 [2024-11-28 08:57:48.022832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:54.151 [2024-11-28 08:57:48.022858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:54.151 [2024-11-28 08:57:48.030818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:54.151 [2024-11-28 08:57:48.031335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:54.151 [2024-11-28 08:57:48.052823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.151 08:57:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:54.151 { 00:13:54.151 "ublk_device": "/dev/ublkb0", 00:13:54.151 "id": 0, 00:13:54.151 "queue_depth": 512, 00:13:54.151 "num_queues": 4, 00:13:54.151 "bdev_name": "Malloc0" 00:13:54.151 } 00:13:54.151 ]' 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:54.151 08:57:48 ublk.test_create_ublk 
-- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:54.151 08:57:48 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:54.409 fio: verification read phase will never start because write phase uses all of runtime 00:13:54.409 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:54.409 fio-3.35 00:13:54.409 Starting 1 process 00:14:04.381 00:14:04.381 fio_test: (groupid=0, jobs=1): err= 0: pid=83040: Thu Nov 28 08:57:58 2024 00:14:04.381 write: IOPS=17.3k, BW=67.8MiB/s (71.1MB/s)(678MiB/10001msec); 0 zone resets 00:14:04.381 clat (usec): min=30, max=4163, avg=56.96, stdev=78.00 00:14:04.381 lat (usec): min=31, max=4163, avg=57.36, stdev=78.01 00:14:04.381 clat percentiles (usec): 00:14:04.381 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 51], 00:14:04.381 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 56], 00:14:04.381 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 63], 00:14:04.381 | 99.00th=[ 71], 99.50th=[ 74], 99.90th=[ 1237], 99.95th=[ 2343], 00:14:04.381 | 99.99th=[ 3261] 00:14:04.381 bw ( KiB/s): min=68312, max=73136, per=100.00%, avg=69488.84, stdev=1131.55, samples=19 00:14:04.381 iops : min=17078, max=18284, avg=17372.21, stdev=282.89, samples=19 00:14:04.381 lat (usec) : 50=12.61%, 100=87.20%, 250=0.06%, 500=0.01%, 750=0.01% 00:14:04.381 lat (usec) : 1000=0.01% 00:14:04.381 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:14:04.381 cpu : usr=1.63%, sys=8.47%, ctx=173506, majf=0, minf=797 00:14:04.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.381 issued rwts: total=0,173505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.381 
00:14:04.381 Run status group 0 (all jobs): 00:14:04.381 WRITE: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=678MiB (711MB), run=10001-10001msec 00:14:04.381 00:14:04.381 Disk stats (read/write): 00:14:04.381 ublkb0: ios=0/171737, merge=0/0, ticks=0/9014, in_queue=9014, util=99.08% 00:14:04.381 08:57:58 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:04.381 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.381 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.381 [2024-11-28 08:57:58.471507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.640 [2024-11-28 08:57:58.508857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.640 [2024-11-28 08:57:58.509606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.641 [2024-11-28 08:57:58.519818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.641 [2024-11-28 08:57:58.520120] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.641 [2024-11-28 08:57:58.520129] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t 
rpc_cmd 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 [2024-11-28 08:57:58.526926] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:04.641 request: 00:14:04.641 { 00:14:04.641 "ublk_id": 0, 00:14:04.641 "method": "ublk_stop_disk", 00:14:04.641 "req_id": 1 00:14:04.641 } 00:14:04.641 Got JSON-RPC error response 00:14:04.641 response: 00:14:04.641 { 00:14:04.641 "code": -19, 00:14:04.641 "message": "No such device" 00:14:04.641 } 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.641 08:57:58 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 [2024-11-28 08:57:58.543885] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.641 [2024-11-28 08:57:58.545647] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.641 [2024-11-28 08:57:58.545676] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:04.641 ************************************ 00:14:04.641 END TEST test_create_ublk 00:14:04.641 ************************************ 00:14:04.641 08:57:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.641 00:14:04.641 real 0m10.783s 00:14:04.641 user 0m0.469s 00:14:04.641 sys 0m0.910s 00:14:04.641 08:57:58 ublk.test_create_ublk -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 08:57:58 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:04.641 08:57:58 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:04.641 08:57:58 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.641 08:57:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 ************************************ 00:14:04.641 START TEST test_create_multi_ublk 00:14:04.641 ************************************ 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.641 [2024-11-28 08:57:58.754818] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.641 [2024-11-28 08:57:58.755928] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:04.641 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.900 08:57:58 
ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.900 [2024-11-28 08:57:58.839230] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:04.900 [2024-11-28 08:57:58.839545] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:04.900 [2024-11-28 08:57:58.839554] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:04.900 [2024-11-28 08:57:58.839559] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.900 [2024-11-28 08:57:58.862828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.900 [2024-11-28 08:57:58.862847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.900 [2024-11-28 08:57:58.874825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.900 [2024-11-28 08:57:58.875346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:04.900 [2024-11-28 08:57:58.914826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b 
Malloc1 128 4096 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.900 08:57:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.900 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.900 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:04.900 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:04.900 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.900 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.159 [2024-11-28 08:57:59.018923] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:05.159 [2024-11-28 08:57:59.019242] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:05.159 [2024-11-28 08:57:59.019253] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:05.159 [2024-11-28 08:57:59.019260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.159 [2024-11-28 08:57:59.030836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.159 [2024-11-28 08:57:59.030856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.159 [2024-11-28 08:57:59.042821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.159 [2024-11-28 08:57:59.043333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:05.159 [2024-11-28 08:57:59.082829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@68 -- # ublk_id=1 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.159 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.159 [2024-11-28 08:57:59.190918] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:05.159 [2024-11-28 08:57:59.191235] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:05.159 [2024-11-28 08:57:59.191249] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:05.159 [2024-11-28 08:57:59.191254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.159 [2024-11-28 08:57:59.202851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.160 [2024-11-28 08:57:59.202869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.160 [2024-11-28 08:57:59.214840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.160 [2024-11-28 08:57:59.215355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:05.160 [2024-11-28 08:57:59.221854] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.160 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.418 [2024-11-28 08:57:59.326916] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:05.418 [2024-11-28 08:57:59.327232] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:05.418 [2024-11-28 08:57:59.327244] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:05.418 [2024-11-28 08:57:59.327251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:05.418 [2024-11-28 08:57:59.338836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:05.418 [2024-11-28 08:57:59.338858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:05.418 [2024-11-28 08:57:59.350823] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:05.418 [2024-11-28 08:57:59.351332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:05.418 [2024-11-28 08:57:59.386822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.418 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:05.418 { 00:14:05.418 "ublk_device": "/dev/ublkb0", 00:14:05.418 "id": 0, 00:14:05.418 "queue_depth": 512, 00:14:05.418 "num_queues": 4, 00:14:05.418 "bdev_name": "Malloc0" 00:14:05.418 }, 00:14:05.419 { 00:14:05.419 "ublk_device": "/dev/ublkb1", 00:14:05.419 "id": 1, 00:14:05.419 "queue_depth": 512, 00:14:05.419 "num_queues": 4, 00:14:05.419 "bdev_name": "Malloc1" 00:14:05.419 }, 00:14:05.419 { 00:14:05.419 "ublk_device": "/dev/ublkb2", 00:14:05.419 "id": 2, 00:14:05.419 "queue_depth": 512, 00:14:05.419 "num_queues": 4, 00:14:05.419 "bdev_name": "Malloc2" 00:14:05.419 }, 00:14:05.419 { 00:14:05.419 "ublk_device": "/dev/ublkb3", 00:14:05.419 "id": 3, 00:14:05.419 "queue_depth": 512, 00:14:05.419 "num_queues": 4, 00:14:05.419 "bdev_name": "Malloc3" 00:14:05.419 } 00:14:05.419 ]' 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.419 
08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.419 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 
-- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:05.677 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:05.935 08:57:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:05.935 
08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.935 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.192 [2024-11-28 08:58:00.054883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.192 [2024-11-28 08:58:00.088387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.192 [2024-11-28 08:58:00.089507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.192 [2024-11-28 08:58:00.094821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.192 [2024-11-28 08:58:00.095075] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:06.192 [2024-11-28 08:58:00.095086] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.192 [2024-11-28 
08:58:00.109919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.192 [2024-11-28 08:58:00.145871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.192 [2024-11-28 08:58:00.146629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.192 [2024-11-28 08:58:00.153836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.192 [2024-11-28 08:58:00.154080] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:06.192 [2024-11-28 08:58:00.154091] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.192 [2024-11-28 08:58:00.169904] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.192 [2024-11-28 08:58:00.206846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.192 [2024-11-28 08:58:00.207549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.192 [2024-11-28 08:58:00.213818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.192 [2024-11-28 08:58:00.214063] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:06.192 [2024-11-28 08:58:00.214075] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.192 [2024-11-28 08:58:00.221878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:06.192 [2024-11-28 08:58:00.252847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:06.192 [2024-11-28 08:58:00.253470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:06.192 [2024-11-28 08:58:00.260827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:06.192 [2024-11-28 08:58:00.261063] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:06.192 [2024-11-28 08:58:00.261073] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.192 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:06.449 [2024-11-28 08:58:00.460881] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.449 [2024-11-28 08:58:00.462072] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:06.449 [2024-11-28 08:58:00.462103] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.449 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:06.707 08:58:00 
ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:06.707 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:06.965 ************************************ 00:14:06.965 END TEST test_create_multi_ublk 00:14:06.965 ************************************ 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:06.965 00:14:06.965 real 0m2.129s 00:14:06.965 user 0m0.814s 00:14:06.965 sys 0m0.131s 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.965 08:58:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:06.965 08:58:00 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:06.965 08:58:00 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:06.965 08:58:00 ublk -- ublk/ublk.sh@130 -- # killprocess 83001 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@950 -- # '[' -z 83001 ']' 00:14:06.965 08:58:00 ublk -- 
common/autotest_common.sh@954 -- # kill -0 83001 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@955 -- # uname 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83001 00:14:06.965 killing process with pid 83001 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83001' 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@969 -- # kill 83001 00:14:06.965 08:58:00 ublk -- common/autotest_common.sh@974 -- # wait 83001 00:14:07.222 [2024-11-28 08:58:01.157127] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.222 [2024-11-28 08:58:01.157183] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.483 00:14:07.483 real 0m19.423s 00:14:07.483 user 0m28.684s 00:14:07.483 sys 0m7.993s 00:14:07.483 ************************************ 00:14:07.483 END TEST ublk 00:14:07.483 ************************************ 00:14:07.483 08:58:01 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.483 08:58:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 08:58:01 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.483 08:58:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:07.483 08:58:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.483 08:58:01 -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 ************************************ 00:14:07.483 START TEST ublk_recovery 00:14:07.483 ************************************ 00:14:07.483 08:58:01 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.744 * Looking for test storage... 
00:14:07.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:07.744 08:58:01 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:07.744 08:58:01 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:07.744 08:58:01 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:07.744 08:58:01 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:07.744 08:58:01 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.745 08:58:01 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.745 08:58:01 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.745 08:58:01 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.745 --rc genhtml_branch_coverage=1 00:14:07.745 --rc genhtml_function_coverage=1 00:14:07.745 --rc genhtml_legend=1 00:14:07.745 --rc geninfo_all_blocks=1 00:14:07.745 --rc geninfo_unexecuted_blocks=1 00:14:07.745 00:14:07.745 ' 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.745 --rc genhtml_branch_coverage=1 00:14:07.745 --rc genhtml_function_coverage=1 00:14:07.745 --rc genhtml_legend=1 00:14:07.745 --rc geninfo_all_blocks=1 00:14:07.745 --rc geninfo_unexecuted_blocks=1 00:14:07.745 00:14:07.745 ' 
00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.745 --rc genhtml_branch_coverage=1 00:14:07.745 --rc genhtml_function_coverage=1 00:14:07.745 --rc genhtml_legend=1 00:14:07.745 --rc geninfo_all_blocks=1 00:14:07.745 --rc geninfo_unexecuted_blocks=1 00:14:07.745 00:14:07.745 ' 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.745 --rc genhtml_branch_coverage=1 00:14:07.745 --rc genhtml_function_coverage=1 00:14:07.745 --rc genhtml_legend=1 00:14:07.745 --rc geninfo_all_blocks=1 00:14:07.745 --rc geninfo_unexecuted_blocks=1 00:14:07.745 00:14:07.745 ' 00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:07.745 08:58:01 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83371 00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83371 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83371 ']' 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.745 08:58:01 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.745 08:58:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.745 [2024-11-28 08:58:01.823132] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:07.745 [2024-11-28 08:58:01.823248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83371 ] 00:14:08.004 [2024-11-28 08:58:01.968469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:08.004 [2024-11-28 08:58:02.020385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.004 [2024-11-28 08:58:02.020474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:08.571 08:58:02 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.571 [2024-11-28 08:58:02.679819] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.571 [2024-11-28 08:58:02.681055] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.571 08:58:02 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.571 08:58:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.829 malloc0 00:14:08.829 08:58:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.829 08:58:02 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:08.829 08:58:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.829 08:58:02 
ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.829 [2024-11-28 08:58:02.719928] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:08.829 [2024-11-28 08:58:02.720022] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:08.829 [2024-11-28 08:58:02.720029] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.829 [2024-11-28 08:58:02.720037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.829 [2024-11-28 08:58:02.727831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.829 [2024-11-28 08:58:02.727854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.829 [2024-11-28 08:58:02.735832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.829 [2024-11-28 08:58:02.735952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.829 [2024-11-28 08:58:02.751825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.829 1 00:14:08.829 08:58:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.829 08:58:02 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:09.856 08:58:03 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83405 00:14:09.856 08:58:03 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:09.856 08:58:03 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:09.856 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.856 fio-3.35 00:14:09.856 Starting 1 process 00:14:15.132 08:58:08 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83371 
00:14:15.132 08:58:08 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:20.421 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83371 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:20.421 08:58:13 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83516 00:14:20.421 08:58:13 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.421 08:58:13 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83516 00:14:20.421 08:58:13 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83516 ']' 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.421 08:58:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.421 [2024-11-28 08:58:13.835415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:20.421 [2024-11-28 08:58:13.835680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83516 ] 00:14:20.421 [2024-11-28 08:58:13.976083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:20.421 [2024-11-28 08:58:14.023030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.421 [2024-11-28 08:58:14.023075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:20.680 08:58:14 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.680 [2024-11-28 08:58:14.686817] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:20.680 [2024-11-28 08:58:14.688038] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.680 08:58:14 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.680 malloc0 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.680 08:58:14 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.680 08:58:14 ublk_recovery 
-- common/autotest_common.sh@10 -- # set +x 00:14:20.680 [2024-11-28 08:58:14.726945] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:20.680 [2024-11-28 08:58:14.726980] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:20.680 [2024-11-28 08:58:14.726987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:20.680 [2024-11-28 08:58:14.734853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:20.680 [2024-11-28 08:58:14.734871] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:20.680 1 00:14:20.680 08:58:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.680 08:58:14 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83405 00:14:22.053 [2024-11-28 08:58:15.734911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.053 [2024-11-28 08:58:15.742820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.053 [2024-11-28 08:58:15.742838] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:22.988 [2024-11-28 08:58:16.742855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.988 [2024-11-28 08:58:16.746832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.988 [2024-11-28 08:58:16.746843] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.922 [2024-11-28 08:58:17.746864] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.922 [2024-11-28 08:58:17.754831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.922 [2024-11-28 08:58:17.754847] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.922 [2024-11-28 08:58:17.754853] 
ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:23.922 [2024-11-28 08:58:17.754925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:45.847 [2024-11-28 08:58:39.037820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:45.847 [2024-11-28 08:58:39.045223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:45.847 [2024-11-28 08:58:39.053020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:45.847 [2024-11-28 08:58:39.053107] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:12.379 00:15:12.379 fio_test: (groupid=0, jobs=1): err= 0: pid=83408: Thu Nov 28 08:59:04 2024 00:15:12.379 read: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(3262MiB/60002msec) 00:15:12.379 slat (nsec): min=1284, max=249383, avg=5560.16, stdev=1481.92 00:15:12.379 clat (usec): min=768, max=30297k, avg=4186.46, stdev=243578.88 00:15:12.379 lat (usec): min=774, max=30297k, avg=4192.02, stdev=243578.88 00:15:12.379 clat percentiles (usec): 00:15:12.379 | 1.00th=[ 1860], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2040], 00:15:12.379 | 30.00th=[ 2073], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:15:12.379 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3195], 00:15:12.379 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 7570], 99.95th=[11994], 00:15:12.379 | 99.99th=[13435] 00:15:12.379 bw ( KiB/s): min=47840, max=116744, per=100.00%, avg=111441.22, stdev=12923.54, samples=59 00:15:12.379 iops : min=11960, max=29186, avg=27860.27, stdev=3230.88, samples=59 00:15:12.379 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(3258MiB/60002msec); 0 zone resets 00:15:12.379 slat (nsec): min=1553, max=1284.6k, avg=5819.53, stdev=2428.88 00:15:12.379 clat (usec): min=763, max=30297k, avg=5003.01, stdev=285315.56 
00:15:12.379 lat (usec): min=769, max=30297k, avg=5008.83, stdev=285315.56 00:15:12.379 clat percentiles (usec): 00:15:12.379 | 1.00th=[ 1926], 5.00th=[ 2089], 10.00th=[ 2114], 20.00th=[ 2147], 00:15:12.379 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:15:12.379 | 70.00th=[ 2245], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3130], 00:15:12.379 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7701], 99.95th=[12387], 00:15:12.379 | 99.99th=[13566] 00:15:12.379 bw ( KiB/s): min=47968, max=116576, per=100.00%, avg=111312.54, stdev=12838.89, samples=59 00:15:12.379 iops : min=11992, max=29144, avg=27828.10, stdev=3209.71, samples=59 00:15:12.379 lat (usec) : 1000=0.01% 00:15:12.379 lat (msec) : 2=3.86%, 4=93.15%, 10=2.93%, 20=0.05%, >=2000=0.01% 00:15:12.379 cpu : usr=3.04%, sys=16.38%, ctx=54568, majf=0, minf=13 00:15:12.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:12.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.379 issued rwts: total=835199,834158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.379 00:15:12.379 Run status group 0 (all jobs): 00:15:12.379 READ: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=3262MiB (3421MB), run=60002-60002msec 00:15:12.379 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=3258MiB (3417MB), run=60002-60002msec 00:15:12.379 00:15:12.379 Disk stats (read/write): 00:15:12.379 ublkb1: ios=832275/831216, merge=0/0, ticks=3441715/4045732, in_queue=7487447, util=99.89% 00:15:12.379 08:59:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:12.379 08:59:04 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.379 08:59:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.379 
[2024-11-28 08:59:04.022702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:12.379 [2024-11-28 08:59:04.068894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:12.379 [2024-11-28 08:59:04.069063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:12.379 [2024-11-28 08:59:04.074833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:12.379 [2024-11-28 08:59:04.074931] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:12.379 [2024-11-28 08:59:04.074942] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:12.379 08:59:04 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.379 08:59:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:12.379 08:59:04 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.379 08:59:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.380 [2024-11-28 08:59:04.089897] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.380 [2024-11-28 08:59:04.091063] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.380 [2024-11-28 08:59:04.091092] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.380 08:59:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:12.380 08:59:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:12.380 08:59:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83516 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83516 ']' 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83516 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:12.380 08:59:04 ublk_recovery -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83516 00:15:12.380 killing process with pid 83516 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83516' 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83516 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83516 00:15:12.380 [2024-11-28 08:59:04.354374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.380 [2024-11-28 08:59:04.354419] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.380 00:15:12.380 real 1m3.167s 00:15:12.380 user 1m42.645s 00:15:12.380 sys 0m24.770s 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.380 ************************************ 00:15:12.380 END TEST ublk_recovery 00:15:12.380 ************************************ 00:15:12.380 08:59:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.380 08:59:04 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:12.380 08:59:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.380 08:59:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.380 08:59:04 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 
00:15:12.380 08:59:04 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:12.380 08:59:04 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.380 08:59:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:12.380 08:59:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.380 08:59:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.380 ************************************ 00:15:12.380 START TEST ftl 00:15:12.380 ************************************ 00:15:12.380 08:59:04 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.380 * Looking for test storage... 00:15:12.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.380 08:59:04 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:12.380 08:59:04 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:12.380 08:59:04 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.380 08:59:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.380 08:59:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.380 08:59:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.380 08:59:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.380 08:59:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.380 08:59:05 
ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:12.380 08:59:05 ftl -- scripts/common.sh@345 -- # : 1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.380 08:59:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.380 08:59:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@353 -- # local d=1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.380 08:59:05 ftl -- scripts/common.sh@355 -- # echo 1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.380 08:59:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@353 -- # local d=2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.380 08:59:05 ftl -- scripts/common.sh@355 -- # echo 2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.380 08:59:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.380 08:59:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.380 08:59:05 ftl -- scripts/common.sh@368 -- # return 0 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.380 --rc genhtml_branch_coverage=1 00:15:12.380 --rc genhtml_function_coverage=1 00:15:12.380 --rc genhtml_legend=1 00:15:12.380 --rc geninfo_all_blocks=1 00:15:12.380 --rc geninfo_unexecuted_blocks=1 00:15:12.380 00:15:12.380 ' 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.380 --rc genhtml_branch_coverage=1 00:15:12.380 --rc genhtml_function_coverage=1 00:15:12.380 --rc genhtml_legend=1 00:15:12.380 --rc 
geninfo_all_blocks=1 00:15:12.380 --rc geninfo_unexecuted_blocks=1 00:15:12.380 00:15:12.380 ' 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.380 --rc genhtml_branch_coverage=1 00:15:12.380 --rc genhtml_function_coverage=1 00:15:12.380 --rc genhtml_legend=1 00:15:12.380 --rc geninfo_all_blocks=1 00:15:12.380 --rc geninfo_unexecuted_blocks=1 00:15:12.380 00:15:12.380 ' 00:15:12.380 08:59:05 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:12.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.380 --rc genhtml_branch_coverage=1 00:15:12.380 --rc genhtml_function_coverage=1 00:15:12.380 --rc genhtml_legend=1 00:15:12.380 --rc geninfo_all_blocks=1 00:15:12.380 --rc geninfo_unexecuted_blocks=1 00:15:12.380 00:15:12.380 ' 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:12.380 08:59:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.380 08:59:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.380 08:59:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.380 08:59:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:12.380 08:59:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:12.380 08:59:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.380 08:59:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.380 08:59:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.380 08:59:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.380 08:59:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.380 08:59:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:12.380 08:59:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:12.380 08:59:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.380 08:59:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.380 08:59:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:12.380 08:59:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.380 08:59:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.380 08:59:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.380 08:59:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.380 08:59:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:15:12.380 08:59:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:12.380 08:59:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.380 08:59:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:12.380 08:59:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:12.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.380 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.380 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.380 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.381 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.381 08:59:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84315 00:15:12.381 08:59:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:12.381 08:59:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84315 00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@831 -- # '[' -z 84315 ']' 00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.381 08:59:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:12.381 [2024-11-28 08:59:05.591643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:12.381 [2024-11-28 08:59:05.591906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84315 ] 00:15:12.381 [2024-11-28 08:59:05.735487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.381 [2024-11-28 08:59:05.781784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.381 08:59:06 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.381 08:59:06 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:12.381 08:59:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:12.639 08:59:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:12.906 08:59:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:12.906 08:59:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:13.477 08:59:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:13.477 08:59:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:13.477 08:59:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@50 -- # 
break 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@63 -- # break 00:15:13.737 08:59:07 ftl -- ftl/ftl.sh@66 -- # killprocess 84315 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 84315 ']' 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@954 -- # kill -0 84315 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@955 -- # uname 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84315 00:15:13.737 killing process with pid 84315 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84315' 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@969 -- # kill 84315 00:15:13.737 08:59:07 ftl -- common/autotest_common.sh@974 -- # wait 84315 00:15:14.308 08:59:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:14.308 08:59:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.308 08:59:08 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:14.308 08:59:08 ftl -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.308 08:59:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:14.308 ************************************ 00:15:14.308 START TEST ftl_fio_basic 00:15:14.308 ************************************ 00:15:14.308 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:14.308 * Looking for test storage... 00:15:14.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.308 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 
00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.309 --rc genhtml_branch_coverage=1 00:15:14.309 --rc genhtml_function_coverage=1 00:15:14.309 --rc genhtml_legend=1 00:15:14.309 --rc geninfo_all_blocks=1 00:15:14.309 --rc geninfo_unexecuted_blocks=1 00:15:14.309 00:15:14.309 ' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:14.309 --rc genhtml_branch_coverage=1 00:15:14.309 --rc genhtml_function_coverage=1 00:15:14.309 --rc genhtml_legend=1 00:15:14.309 --rc geninfo_all_blocks=1 00:15:14.309 --rc geninfo_unexecuted_blocks=1 00:15:14.309 00:15:14.309 ' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.309 --rc genhtml_branch_coverage=1 00:15:14.309 --rc genhtml_function_coverage=1 00:15:14.309 --rc genhtml_legend=1 00:15:14.309 --rc geninfo_all_blocks=1 00:15:14.309 --rc geninfo_unexecuted_blocks=1 00:15:14.309 00:15:14.309 ' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.309 --rc genhtml_branch_coverage=1 00:15:14.309 --rc genhtml_function_coverage=1 00:15:14.309 --rc genhtml_legend=1 00:15:14.309 --rc geninfo_all_blocks=1 00:15:14.309 --rc geninfo_unexecuted_blocks=1 00:15:14.309 00:15:14.309 ' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # 
export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export 
FTL_BDEV_NAME=ftl0 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84425 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84425 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84425 ']' 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.309 08:59:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:14.575 [2024-11-28 08:59:08.492050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:14.575 [2024-11-28 08:59:08.492365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84425 ] 00:15:14.575 [2024-11-28 08:59:08.644288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.835 [2024-11-28 08:59:08.707746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.835 [2024-11-28 08:59:08.708057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.835 [2024-11-28 08:59:08.708126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:15.402 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:15.660 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:15.660 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:15.660 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:15.661 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:15.661 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:15.661 08:59:09 
ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:15.661 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:15.661 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:15.919 { 00:15:15.919 "name": "nvme0n1", 00:15:15.919 "aliases": [ 00:15:15.919 "15d4fd42-b3f2-4b8d-816b-1c9213a41be1" 00:15:15.919 ], 00:15:15.919 "product_name": "NVMe disk", 00:15:15.919 "block_size": 4096, 00:15:15.919 "num_blocks": 1310720, 00:15:15.919 "uuid": "15d4fd42-b3f2-4b8d-816b-1c9213a41be1", 00:15:15.919 "numa_id": -1, 00:15:15.919 "assigned_rate_limits": { 00:15:15.919 "rw_ios_per_sec": 0, 00:15:15.919 "rw_mbytes_per_sec": 0, 00:15:15.919 "r_mbytes_per_sec": 0, 00:15:15.919 "w_mbytes_per_sec": 0 00:15:15.919 }, 00:15:15.919 "claimed": false, 00:15:15.919 "zoned": false, 00:15:15.919 "supported_io_types": { 00:15:15.919 "read": true, 00:15:15.919 "write": true, 00:15:15.919 "unmap": true, 00:15:15.919 "flush": true, 00:15:15.919 "reset": true, 00:15:15.919 "nvme_admin": true, 00:15:15.919 "nvme_io": true, 00:15:15.919 "nvme_io_md": false, 00:15:15.919 "write_zeroes": true, 00:15:15.919 "zcopy": false, 00:15:15.919 "get_zone_info": false, 00:15:15.919 "zone_management": false, 00:15:15.919 "zone_append": false, 00:15:15.919 "compare": true, 00:15:15.919 "compare_and_write": false, 00:15:15.919 "abort": true, 00:15:15.919 "seek_hole": false, 00:15:15.919 "seek_data": false, 00:15:15.919 "copy": true, 00:15:15.919 "nvme_iov_md": false 00:15:15.919 }, 00:15:15.919 "driver_specific": { 00:15:15.919 "nvme": [ 00:15:15.919 { 00:15:15.919 "pci_address": "0000:00:11.0", 00:15:15.919 "trid": { 00:15:15.919 "trtype": "PCIe", 00:15:15.919 "traddr": "0000:00:11.0" 00:15:15.919 }, 00:15:15.919 "ctrlr_data": { 00:15:15.919 "cntlid": 0, 00:15:15.919 "vendor_id": "0x1b36", 
00:15:15.919 "model_number": "QEMU NVMe Ctrl", 00:15:15.919 "serial_number": "12341", 00:15:15.919 "firmware_revision": "8.0.0", 00:15:15.919 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:15.919 "oacs": { 00:15:15.919 "security": 0, 00:15:15.919 "format": 1, 00:15:15.919 "firmware": 0, 00:15:15.919 "ns_manage": 1 00:15:15.919 }, 00:15:15.919 "multi_ctrlr": false, 00:15:15.919 "ana_reporting": false 00:15:15.919 }, 00:15:15.919 "vs": { 00:15:15.919 "nvme_version": "1.4" 00:15:15.919 }, 00:15:15.919 "ns_data": { 00:15:15.919 "id": 1, 00:15:15.919 "can_share": false 00:15:15.919 } 00:15:15.919 } 00:15:15.919 ], 00:15:15.919 "mp_policy": "active_passive" 00:15:15.919 } 00:15:15.919 } 00:15:15.919 ]' 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:15.919 08:59:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:16.177 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:16.177 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:16.177 08:59:10 ftl.ftl_fio_basic -- 
ftl/common.sh@68 -- # lvs=e2897c94-efe7-4c92-8715-e856b15d3a9b 00:15:16.177 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e2897c94-efe7-4c92-8715-e856b15d3a9b 00:15:16.435 08:59:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.435 08:59:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.435 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:16.436 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:16.694 { 00:15:16.694 "name": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:16.694 "aliases": [ 00:15:16.694 "lvs/nvme0n1p0" 00:15:16.694 ], 00:15:16.694 "product_name": "Logical Volume", 00:15:16.694 "block_size": 4096, 00:15:16.694 "num_blocks": 26476544, 00:15:16.694 "uuid": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:16.694 
"assigned_rate_limits": { 00:15:16.694 "rw_ios_per_sec": 0, 00:15:16.694 "rw_mbytes_per_sec": 0, 00:15:16.694 "r_mbytes_per_sec": 0, 00:15:16.694 "w_mbytes_per_sec": 0 00:15:16.694 }, 00:15:16.694 "claimed": false, 00:15:16.694 "zoned": false, 00:15:16.694 "supported_io_types": { 00:15:16.694 "read": true, 00:15:16.694 "write": true, 00:15:16.694 "unmap": true, 00:15:16.694 "flush": false, 00:15:16.694 "reset": true, 00:15:16.694 "nvme_admin": false, 00:15:16.694 "nvme_io": false, 00:15:16.694 "nvme_io_md": false, 00:15:16.694 "write_zeroes": true, 00:15:16.694 "zcopy": false, 00:15:16.694 "get_zone_info": false, 00:15:16.694 "zone_management": false, 00:15:16.694 "zone_append": false, 00:15:16.694 "compare": false, 00:15:16.694 "compare_and_write": false, 00:15:16.694 "abort": false, 00:15:16.694 "seek_hole": true, 00:15:16.694 "seek_data": true, 00:15:16.694 "copy": false, 00:15:16.694 "nvme_iov_md": false 00:15:16.694 }, 00:15:16.694 "driver_specific": { 00:15:16.694 "lvol": { 00:15:16.694 "lvol_store_uuid": "e2897c94-efe7-4c92-8715-e856b15d3a9b", 00:15:16.694 "base_bdev": "nvme0n1", 00:15:16.694 "thin_provision": true, 00:15:16.694 "num_allocated_clusters": 0, 00:15:16.694 "snapshot": false, 00:15:16.694 "clone": false, 00:15:16.694 "esnap_clone": false 00:15:16.694 } 00:15:16.694 } 00:15:16.694 } 00:15:16.694 ]' 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 
00:15:16.694 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:16.694 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:16.953 08:59:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:17.211 { 00:15:17.211 "name": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:17.211 "aliases": [ 00:15:17.211 "lvs/nvme0n1p0" 00:15:17.211 ], 00:15:17.211 "product_name": "Logical Volume", 00:15:17.211 "block_size": 4096, 00:15:17.211 "num_blocks": 26476544, 00:15:17.211 "uuid": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:17.211 "assigned_rate_limits": { 00:15:17.211 "rw_ios_per_sec": 0, 00:15:17.211 "rw_mbytes_per_sec": 0, 00:15:17.211 "r_mbytes_per_sec": 0, 00:15:17.211 "w_mbytes_per_sec": 0 00:15:17.211 }, 00:15:17.211 "claimed": false, 00:15:17.211 "zoned": false, 00:15:17.211 "supported_io_types": { 00:15:17.211 "read": true, 00:15:17.211 "write": true, 00:15:17.211 "unmap": true, 00:15:17.211 "flush": false, 00:15:17.211 "reset": true, 00:15:17.211 "nvme_admin": false, 
00:15:17.211 "nvme_io": false, 00:15:17.211 "nvme_io_md": false, 00:15:17.211 "write_zeroes": true, 00:15:17.211 "zcopy": false, 00:15:17.211 "get_zone_info": false, 00:15:17.211 "zone_management": false, 00:15:17.211 "zone_append": false, 00:15:17.211 "compare": false, 00:15:17.211 "compare_and_write": false, 00:15:17.211 "abort": false, 00:15:17.211 "seek_hole": true, 00:15:17.211 "seek_data": true, 00:15:17.211 "copy": false, 00:15:17.211 "nvme_iov_md": false 00:15:17.211 }, 00:15:17.211 "driver_specific": { 00:15:17.211 "lvol": { 00:15:17.211 "lvol_store_uuid": "e2897c94-efe7-4c92-8715-e856b15d3a9b", 00:15:17.211 "base_bdev": "nvme0n1", 00:15:17.211 "thin_provision": true, 00:15:17.211 "num_allocated_clusters": 0, 00:15:17.211 "snapshot": false, 00:15:17.211 "clone": false, 00:15:17.211 "esnap_clone": false 00:15:17.211 } 00:15:17.211 } 00:15:17.211 } 00:15:17.211 ]' 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:17.211 08:59:11 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:17.470 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary 
operator expected 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:17.470 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9574a11-f4e9-4a2b-85d8-3a60613c8673 00:15:17.728 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:17.728 { 00:15:17.728 "name": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:17.728 "aliases": [ 00:15:17.728 "lvs/nvme0n1p0" 00:15:17.728 ], 00:15:17.728 "product_name": "Logical Volume", 00:15:17.728 "block_size": 4096, 00:15:17.728 "num_blocks": 26476544, 00:15:17.728 "uuid": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:17.728 "assigned_rate_limits": { 00:15:17.728 "rw_ios_per_sec": 0, 00:15:17.728 "rw_mbytes_per_sec": 0, 00:15:17.728 "r_mbytes_per_sec": 0, 00:15:17.728 "w_mbytes_per_sec": 0 00:15:17.728 }, 00:15:17.729 "claimed": false, 00:15:17.729 "zoned": false, 00:15:17.729 "supported_io_types": { 00:15:17.729 "read": true, 00:15:17.729 "write": true, 00:15:17.729 "unmap": true, 00:15:17.729 "flush": false, 00:15:17.729 "reset": true, 00:15:17.729 "nvme_admin": false, 00:15:17.729 "nvme_io": false, 00:15:17.729 "nvme_io_md": false, 00:15:17.729 "write_zeroes": true, 00:15:17.729 "zcopy": false, 00:15:17.729 "get_zone_info": false, 00:15:17.729 "zone_management": false, 00:15:17.729 "zone_append": false, 00:15:17.729 "compare": false, 00:15:17.729 "compare_and_write": false, 00:15:17.729 "abort": false, 00:15:17.729 "seek_hole": true, 00:15:17.729 "seek_data": 
true, 00:15:17.729 "copy": false, 00:15:17.729 "nvme_iov_md": false 00:15:17.729 }, 00:15:17.729 "driver_specific": { 00:15:17.729 "lvol": { 00:15:17.729 "lvol_store_uuid": "e2897c94-efe7-4c92-8715-e856b15d3a9b", 00:15:17.729 "base_bdev": "nvme0n1", 00:15:17.729 "thin_provision": true, 00:15:17.729 "num_allocated_clusters": 0, 00:15:17.729 "snapshot": false, 00:15:17.729 "clone": false, 00:15:17.729 "esnap_clone": false 00:15:17.729 } 00:15:17.729 } 00:15:17.729 } 00:15:17.729 ]' 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:17.729 08:59:11 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d9574a11-f4e9-4a2b-85d8-3a60613c8673 -c nvc0n1p0 --l2p_dram_limit 60 00:15:17.988 [2024-11-28 08:59:11.870188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.988 [2024-11-28 08:59:11.870231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:17.988 [2024-11-28 08:59:11.870242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:17.988 [2024-11-28 08:59:11.870250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.988 [2024-11-28 08:59:11.870304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.988 [2024-11-28 
08:59:11.870321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:17.989 [2024-11-28 08:59:11.870338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:17.989 [2024-11-28 08:59:11.870351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.870389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:17.989 [2024-11-28 08:59:11.870658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:17.989 [2024-11-28 08:59:11.870672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.870680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:17.989 [2024-11-28 08:59:11.870694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:15:17.989 [2024-11-28 08:59:11.870709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.870740] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c02d2b92-8bc5-43f0-a7cf-457532e510d9 00:15:17.989 [2024-11-28 08:59:11.872015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.872043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:17.989 [2024-11-28 08:59:11.872055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:17.989 [2024-11-28 08:59:11.872063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.878753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.878780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:17.989 [2024-11-28 08:59:11.878789] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.596 ms 00:15:17.989 [2024-11-28 08:59:11.878796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.878903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.878921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:17.989 [2024-11-28 08:59:11.878930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:15:17.989 [2024-11-28 08:59:11.878936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.879007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.879015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:17.989 [2024-11-28 08:59:11.879024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:17.989 [2024-11-28 08:59:11.879030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.879066] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:17.989 [2024-11-28 08:59:11.880678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.880707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:17.989 [2024-11-28 08:59:11.880716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:15:17.989 [2024-11-28 08:59:11.880725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.880761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.880771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:17.989 [2024-11-28 08:59:11.880778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:17.989 
[2024-11-28 08:59:11.880787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.880815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:17.989 [2024-11-28 08:59:11.880938] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:17.989 [2024-11-28 08:59:11.880949] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:17.989 [2024-11-28 08:59:11.880960] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:17.989 [2024-11-28 08:59:11.880968] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:17.989 [2024-11-28 08:59:11.880978] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:17.989 [2024-11-28 08:59:11.880984] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:17.989 [2024-11-28 08:59:11.881003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:17.989 [2024-11-28 08:59:11.881009] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:17.989 [2024-11-28 08:59:11.881017] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:17.989 [2024-11-28 08:59:11.881023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.989 [2024-11-28 08:59:11.881030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:17.989 [2024-11-28 08:59:11.881045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:15:17.989 [2024-11-28 08:59:11.881053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.881124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:15:17.989 [2024-11-28 08:59:11.881134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:17.989 [2024-11-28 08:59:11.881141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:17.989 [2024-11-28 08:59:11.881149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.989 [2024-11-28 08:59:11.881239] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:17.989 [2024-11-28 08:59:11.881249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:17.989 [2024-11-28 08:59:11.881263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:17.989 [2024-11-28 08:59:11.881286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:17.989 [2024-11-28 08:59:11.881302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:17.989 [2024-11-28 08:59:11.881315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:17.989 [2024-11-28 08:59:11.881322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:17.989 [2024-11-28 08:59:11.881328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:17.989 [2024-11-28 08:59:11.881339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:17.989 [2024-11-28 08:59:11.881344] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:15:17.989 [2024-11-28 08:59:11.881356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:17.989 [2024-11-28 08:59:11.881369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:17.989 [2024-11-28 08:59:11.881401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:17.989 [2024-11-28 08:59:11.881422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:17.989 [2024-11-28 08:59:11.881441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:17.989 [2024-11-28 08:59:11.881463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:17.989 [2024-11-28 08:59:11.881483] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:17.989 [2024-11-28 08:59:11.881497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:17.989 [2024-11-28 08:59:11.881504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:17.989 [2024-11-28 08:59:11.881510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:17.989 [2024-11-28 08:59:11.881518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:17.989 [2024-11-28 08:59:11.881525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:17.989 [2024-11-28 08:59:11.881532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:17.989 [2024-11-28 08:59:11.881545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:17.989 [2024-11-28 08:59:11.881550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881558] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:17.989 [2024-11-28 08:59:11.881565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:17.989 [2024-11-28 08:59:11.881582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:17.989 [2024-11-28 08:59:11.881608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:17.989 [2024-11-28 08:59:11.881614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:17.989 [2024-11-28 08:59:11.881621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:17.989 [2024-11-28 
08:59:11.881627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:17.989 [2024-11-28 08:59:11.881635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:17.989 [2024-11-28 08:59:11.881642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:17.989 [2024-11-28 08:59:11.881652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:17.990 [2024-11-28 08:59:11.881662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:17.990 [2024-11-28 08:59:11.881678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:17.990 [2024-11-28 08:59:11.881687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:17.990 [2024-11-28 08:59:11.881692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:17.990 [2024-11-28 08:59:11.881700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:17.990 [2024-11-28 08:59:11.881707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:17.990 [2024-11-28 08:59:11.881716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:17.990 [2024-11-28 08:59:11.881721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:17.990 [2024-11-28 08:59:11.881728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:17.990 [2024-11-28 08:59:11.881733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:17.990 [2024-11-28 08:59:11.881765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:17.990 [2024-11-28 08:59:11.881771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:17.990 [2024-11-28 08:59:11.881784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:17.990 [2024-11-28 08:59:11.881791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:17.990 [2024-11-28 08:59:11.881807] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:17.990 [2024-11-28 08:59:11.881815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.990 [2024-11-28 08:59:11.881822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:17.990 [2024-11-28 08:59:11.881831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:15:17.990 [2024-11-28 08:59:11.881837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.990 [2024-11-28 08:59:11.881894] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:17.990 [2024-11-28 08:59:11.881911] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:20.521 [2024-11-28 08:59:14.231900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.521 [2024-11-28 08:59:14.231973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:20.521 [2024-11-28 08:59:14.231989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2349.990 ms 00:15:20.521 [2024-11-28 08:59:14.231996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.521 [2024-11-28 08:59:14.252845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.521 [2024-11-28 08:59:14.252935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:20.521 [2024-11-28 08:59:14.252972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.745 ms 00:15:20.521 [2024-11-28 08:59:14.252994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.521 [2024-11-28 08:59:14.253324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.521 [2024-11-28 08:59:14.253355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band 
addresses 00:15:20.521 [2024-11-28 08:59:14.253382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:15:20.521 [2024-11-28 08:59:14.253403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.521 [2024-11-28 08:59:14.267879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.267909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:20.522 [2024-11-28 08:59:14.267919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.354 ms 00:15:20.522 [2024-11-28 08:59:14.267926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.267960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.267967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:20.522 [2024-11-28 08:59:14.267975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:20.522 [2024-11-28 08:59:14.267981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.268400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.268413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:20.522 [2024-11-28 08:59:14.268422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:15:20.522 [2024-11-28 08:59:14.268429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.268543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.268552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:20.522 [2024-11-28 08:59:14.268561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:20.522 [2024-11-28 08:59:14.268568] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.275136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.275163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:20.522 [2024-11-28 08:59:14.275174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.543 ms 00:15:20.522 [2024-11-28 08:59:14.275189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.282565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:20.522 [2024-11-28 08:59:14.298040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.298070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:20.522 [2024-11-28 08:59:14.298079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.789 ms 00:15:20.522 [2024-11-28 08:59:14.298088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.339224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.339261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:20.522 [2024-11-28 08:59:14.339274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.093 ms 00:15:20.522 [2024-11-28 08:59:14.339284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.339444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.339454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:20.522 [2024-11-28 08:59:14.339464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:20.522 [2024-11-28 08:59:14.339471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 
[2024-11-28 08:59:14.342115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.342155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:20.522 [2024-11-28 08:59:14.342164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:15:20.522 [2024-11-28 08:59:14.342176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.344280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.344310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:20.522 [2024-11-28 08:59:14.344319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:15:20.522 [2024-11-28 08:59:14.344327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.344581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.344630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:20.522 [2024-11-28 08:59:14.344637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:15:20.522 [2024-11-28 08:59:14.344646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.371469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.371507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:20.522 [2024-11-28 08:59:14.371520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.789 ms 00:15:20.522 [2024-11-28 08:59:14.371528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.375302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.375334] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:20.522 [2024-11-28 08:59:14.375344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.716 ms 00:15:20.522 [2024-11-28 08:59:14.375352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.377988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.378016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:20.522 [2024-11-28 08:59:14.378024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:15:20.522 [2024-11-28 08:59:14.378032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.380715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.380747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:20.522 [2024-11-28 08:59:14.380756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:15:20.522 [2024-11-28 08:59:14.380767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.380816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.380827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:20.522 [2024-11-28 08:59:14.380835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:20.522 [2024-11-28 08:59:14.380844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.380910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:20.522 [2024-11-28 08:59:14.380937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:20.522 [2024-11-28 08:59:14.380945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:20.522 [2024-11-28 08:59:14.380955] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:20.522 [2024-11-28 08:59:14.381887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2511.292 ms, result 0 00:15:20.522 { 00:15:20.522 "name": "ftl0", 00:15:20.522 "uuid": "c02d2b92-8bc5-43f0-a7cf-457532e510d9" 00:15:20.522 } 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:20.522 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:20.781 [ 00:15:20.781 { 00:15:20.781 "name": "ftl0", 00:15:20.781 "aliases": [ 00:15:20.781 "c02d2b92-8bc5-43f0-a7cf-457532e510d9" 00:15:20.781 ], 00:15:20.781 "product_name": "FTL disk", 00:15:20.781 "block_size": 4096, 00:15:20.781 "num_blocks": 20971520, 00:15:20.781 "uuid": "c02d2b92-8bc5-43f0-a7cf-457532e510d9", 00:15:20.781 "assigned_rate_limits": { 00:15:20.781 "rw_ios_per_sec": 0, 00:15:20.781 "rw_mbytes_per_sec": 0, 00:15:20.781 "r_mbytes_per_sec": 0, 00:15:20.781 "w_mbytes_per_sec": 0 00:15:20.781 }, 00:15:20.781 "claimed": false, 00:15:20.781 "zoned": false, 00:15:20.781 "supported_io_types": { 00:15:20.781 "read": true, 00:15:20.781 "write": true, 00:15:20.781 "unmap": true, 00:15:20.781 "flush": true, 00:15:20.781 "reset": false, 00:15:20.781 "nvme_admin": 
false, 00:15:20.781 "nvme_io": false, 00:15:20.781 "nvme_io_md": false, 00:15:20.781 "write_zeroes": true, 00:15:20.781 "zcopy": false, 00:15:20.781 "get_zone_info": false, 00:15:20.781 "zone_management": false, 00:15:20.781 "zone_append": false, 00:15:20.781 "compare": false, 00:15:20.781 "compare_and_write": false, 00:15:20.781 "abort": false, 00:15:20.781 "seek_hole": false, 00:15:20.781 "seek_data": false, 00:15:20.781 "copy": false, 00:15:20.781 "nvme_iov_md": false 00:15:20.781 }, 00:15:20.781 "driver_specific": { 00:15:20.781 "ftl": { 00:15:20.781 "base_bdev": "d9574a11-f4e9-4a2b-85d8-3a60613c8673", 00:15:20.781 "cache": "nvc0n1p0" 00:15:20.781 } 00:15:20.781 } 00:15:20.781 } 00:15:20.781 ] 00:15:20.781 08:59:14 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:20.781 08:59:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:20.781 08:59:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:21.039 08:59:15 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:21.039 08:59:15 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:21.299 [2024-11-28 08:59:15.175898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.175942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:21.299 [2024-11-28 08:59:15.175956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:21.299 [2024-11-28 08:59:15.175963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.175995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:21.299 [2024-11-28 08:59:15.176544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.176565] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:21.299 [2024-11-28 08:59:15.176588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:15:21.299 [2024-11-28 08:59:15.176605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.177014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.177036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:21.299 [2024-11-28 08:59:15.177045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:15:21.299 [2024-11-28 08:59:15.177054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.179453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.179484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:21.299 [2024-11-28 08:59:15.179492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:15:21.299 [2024-11-28 08:59:15.179507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.184078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.184108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:21.299 [2024-11-28 08:59:15.184116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.553 ms 00:15:21.299 [2024-11-28 08:59:15.184124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.185662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.185700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:21.299 [2024-11-28 08:59:15.185708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:15:21.299 [2024-11-28 08:59:15.185716] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.189775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.189816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:21.299 [2024-11-28 08:59:15.189825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.023 ms 00:15:21.299 [2024-11-28 08:59:15.189834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.189986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.190002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:21.299 [2024-11-28 08:59:15.190009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:15:21.299 [2024-11-28 08:59:15.190017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.191246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.191275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:21.299 [2024-11-28 08:59:15.191282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:15:21.299 [2024-11-28 08:59:15.191290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.192215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.192246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:21.299 [2024-11-28 08:59:15.192253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:15:21.299 [2024-11-28 08:59:15.192262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.193107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 
[2024-11-28 08:59:15.193137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:21.299 [2024-11-28 08:59:15.193144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:15:21.299 [2024-11-28 08:59:15.193152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.194015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.299 [2024-11-28 08:59:15.194046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:21.299 [2024-11-28 08:59:15.194053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:15:21.299 [2024-11-28 08:59:15.194062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.299 [2024-11-28 08:59:15.194096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:21.299 [2024-11-28 08:59:15.194110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
8: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
22: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:21.299 [2024-11-28 08:59:15.194304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194543] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194638] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194749] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:21.300 [2024-11-28 08:59:15.194830] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:21.300 [2024-11-28 08:59:15.194837] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c02d2b92-8bc5-43f0-a7cf-457532e510d9 00:15:21.300 [2024-11-28 08:59:15.194846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:21.300 [2024-11-28 08:59:15.194852] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:21.300 [2024-11-28 08:59:15.194862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:21.300 [2024-11-28 08:59:15.194868] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:21.300 [2024-11-28 
08:59:15.194876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:21.300 [2024-11-28 08:59:15.194882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:21.300 [2024-11-28 08:59:15.194890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:21.300 [2024-11-28 08:59:15.194896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:21.300 [2024-11-28 08:59:15.194902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:21.300 [2024-11-28 08:59:15.194908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.300 [2024-11-28 08:59:15.194916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:21.300 [2024-11-28 08:59:15.194923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:15:21.300 [2024-11-28 08:59:15.194930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.300 [2024-11-28 08:59:15.196711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.300 [2024-11-28 08:59:15.196737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:21.300 [2024-11-28 08:59:15.196744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:15:21.300 [2024-11-28 08:59:15.196752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.300 [2024-11-28 08:59:15.196880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:21.300 [2024-11-28 08:59:15.196891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:21.300 [2024-11-28 08:59:15.196897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:21.301 [2024-11-28 08:59:15.196904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.202967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 
[2024-11-28 08:59:15.202999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:21.301 [2024-11-28 08:59:15.203008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.203017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.203074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.203083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:21.301 [2024-11-28 08:59:15.203089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.203107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.203180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.203195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:21.301 [2024-11-28 08:59:15.203202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.203210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.203231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.203240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:21.301 [2024-11-28 08:59:15.203262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.203271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.214672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.214707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:21.301 [2024-11-28 08:59:15.214716] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.214724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.223897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.223933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:21.301 [2024-11-28 08:59:15.223942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.223959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:21.301 [2024-11-28 08:59:15.224062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:21.301 [2024-11-28 08:59:15.224129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:21.301 [2024-11-28 08:59:15.224228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 
08:59:15.224275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:21.301 [2024-11-28 08:59:15.224291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:21.301 [2024-11-28 08:59:15.224358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:21.301 [2024-11-28 08:59:15.224430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:21.301 [2024-11-28 08:59:15.224437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:21.301 [2024-11-28 08:59:15.224445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:21.301 [2024-11-28 08:59:15.224612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.684 ms, result 0 00:15:21.301 true 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84425 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84425 ']' 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84425 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.301 08:59:15 
ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84425 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.301 killing process with pid 84425 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84425' 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84425 00:15:21.301 08:59:15 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84425 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.651 08:59:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:24.651 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:24.651 fio-3.35 00:15:24.651 Starting 1 thread 00:15:29.929 00:15:29.929 test: (groupid=0, jobs=1): err= 0: pid=84589: Thu Nov 28 08:59:23 2024 00:15:29.929 read: IOPS=842, BW=56.0MiB/s (58.7MB/s)(255MiB/4547msec) 00:15:29.929 slat (nsec): min=2935, max=26542, avg=4455.56, stdev=2208.49 00:15:29.929 clat (usec): min=281, max=1218, avg=533.75, stdev=143.22 00:15:29.929 lat (usec): min=288, max=1223, avg=538.20, stdev=143.32 00:15:29.929 clat percentiles (usec): 00:15:29.929 | 1.00th=[ 302], 5.00th=[ 322], 10.00th=[ 375], 20.00th=[ 445], 00:15:29.929 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 523], 60.00th=[ 529], 00:15:29.929 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 783], 95.00th=[ 848], 00:15:29.929 | 99.00th=[ 947], 
99.50th=[ 979], 99.90th=[ 1172], 99.95th=[ 1188], 00:15:29.929 | 99.99th=[ 1221] 00:15:29.929 write: IOPS=848, BW=56.4MiB/s (59.1MB/s)(256MiB/4542msec); 0 zone resets 00:15:29.929 slat (nsec): min=13566, max=80860, avg=21642.36, stdev=7050.61 00:15:29.929 clat (usec): min=296, max=1703, avg=607.99, stdev=163.30 00:15:29.929 lat (usec): min=324, max=1746, avg=629.63, stdev=163.37 00:15:29.929 clat percentiles (usec): 00:15:29.929 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 482], 00:15:29.929 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 611], 00:15:29.929 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 881], 95.00th=[ 930], 00:15:29.929 | 99.00th=[ 1156], 99.50th=[ 1221], 99.90th=[ 1565], 99.95th=[ 1582], 00:15:29.929 | 99.99th=[ 1696] 00:15:29.929 bw ( KiB/s): min=50592, max=68952, per=99.86%, avg=57648.89, stdev=6489.08, samples=9 00:15:29.929 iops : min= 744, max= 1014, avg=847.78, stdev=95.43, samples=9 00:15:29.929 lat (usec) : 500=31.10%, 750=55.87%, 1000=11.61% 00:15:29.929 lat (msec) : 2=1.42% 00:15:29.929 cpu : usr=99.25%, sys=0.09%, ctx=6, majf=0, minf=1181 00:15:29.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.930 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.930 00:15:29.930 Run status group 0 (all jobs): 00:15:29.930 READ: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=255MiB (267MB), run=4547-4547msec 00:15:29.930 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=256MiB (269MB), run=4542-4542msec 00:15:30.191 ----------------------------------------------------- 00:15:30.191 Suppressions used: 00:15:30.191 count bytes template 00:15:30.191 1 5 /usr/src/fio/parse.c 00:15:30.191 
1 8 libtcmalloc_minimal.so 00:15:30.191 1 904 libcrypto.so 00:15:30.191 ----------------------------------------------------- 00:15:30.191 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.191 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:30.192 08:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:30.453 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:30.453 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:30.453 fio-3.35 00:15:30.453 Starting 2 threads 00:15:57.007 00:15:57.007 first_half: (groupid=0, jobs=1): err= 0: pid=84686: Thu Nov 28 08:59:47 2024 00:15:57.007 read: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(255MiB/22061msec) 00:15:57.007 slat (nsec): min=2949, max=30797, avg=3821.58, stdev=1098.79 00:15:57.007 clat (usec): min=537, max=285657, avg=32492.17, stdev=16290.55 00:15:57.007 lat (usec): min=543, max=285662, avg=32495.99, stdev=16290.71 00:15:57.007 clat percentiles (msec): 00:15:57.007 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 29], 00:15:57.007 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:57.007 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 41], 00:15:57.007 | 99.00th=[ 122], 99.50th=[ 140], 99.90th=[ 192], 99.95th=[ 220], 00:15:57.007 | 99.99th=[ 279] 00:15:57.007 write: IOPS=3491, BW=13.6MiB/s (14.3MB/s)(256MiB/18770msec); 0 zone 
resets 00:15:57.007 slat (usec): min=3, max=721, avg= 5.86, stdev= 4.29 00:15:57.007 clat (usec): min=327, max=71934, avg=10709.76, stdev=16416.71 00:15:57.007 lat (usec): min=333, max=71940, avg=10715.63, stdev=16416.98 00:15:57.007 clat percentiles (usec): 00:15:57.007 | 1.00th=[ 586], 5.00th=[ 742], 10.00th=[ 881], 20.00th=[ 1172], 00:15:57.007 | 30.00th=[ 2769], 40.00th=[ 3818], 50.00th=[ 4686], 60.00th=[ 5342], 00:15:57.007 | 70.00th=[ 6128], 80.00th=[11731], 90.00th=[26870], 95.00th=[56886], 00:15:57.007 | 99.00th=[65274], 99.50th=[66847], 99.90th=[68682], 99.95th=[69731], 00:15:57.007 | 99.99th=[71828] 00:15:57.007 bw ( KiB/s): min= 928, max=40288, per=75.08%, avg=20971.52, stdev=10344.89, samples=25 00:15:57.007 iops : min= 232, max=10072, avg=5242.88, stdev=2586.22, samples=25 00:15:57.007 lat (usec) : 500=0.05%, 750=2.56%, 1000=4.89% 00:15:57.007 lat (msec) : 2=5.19%, 4=8.77%, 10=18.75%, 20=4.42%, 50=49.07% 00:15:57.007 lat (msec) : 100=5.44%, 250=0.85%, 500=0.02% 00:15:57.007 cpu : usr=99.26%, sys=0.15%, ctx=32, majf=0, minf=5539 00:15:57.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:57.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.007 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.007 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.007 second_half: (groupid=0, jobs=1): err= 0: pid=84688: Thu Nov 28 08:59:47 2024 00:15:57.007 read: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(254MiB/21892msec) 00:15:57.007 slat (nsec): min=3028, max=37888, avg=5454.87, stdev=1495.37 00:15:57.007 clat (usec): min=572, max=274766, avg=33075.95, stdev=15518.78 00:15:57.007 lat (usec): min=577, max=274772, avg=33081.41, stdev=15518.87 00:15:57.007 clat percentiles (msec): 00:15:57.007 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 30], 00:15:57.007 | 30.00th=[ 31], 
40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:15:57.007 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 43], 00:15:57.007 | 99.00th=[ 116], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 199], 00:15:57.007 | 99.99th=[ 259] 00:15:57.007 write: IOPS=4637, BW=18.1MiB/s (19.0MB/s)(256MiB/14132msec); 0 zone resets 00:15:57.007 slat (usec): min=3, max=477, avg= 7.00, stdev= 3.79 00:15:57.007 clat (usec): min=334, max=72030, avg=9855.23, stdev=16253.00 00:15:57.007 lat (usec): min=344, max=72036, avg=9862.24, stdev=16253.31 00:15:57.007 clat percentiles (usec): 00:15:57.007 | 1.00th=[ 652], 5.00th=[ 799], 10.00th=[ 889], 20.00th=[ 1037], 00:15:57.007 | 30.00th=[ 1270], 40.00th=[ 2737], 50.00th=[ 3916], 60.00th=[ 4883], 00:15:57.007 | 70.00th=[ 5866], 80.00th=[10814], 90.00th=[23725], 95.00th=[56886], 00:15:57.007 | 99.00th=[64750], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:15:57.007 | 99.99th=[71828] 00:15:57.007 bw ( KiB/s): min= 4696, max=40984, per=100.00%, avg=29127.11, stdev=10386.49, samples=18 00:15:57.007 iops : min= 1174, max=10246, avg=7281.78, stdev=2596.62, samples=18 00:15:57.007 lat (usec) : 500=0.02%, 750=1.61%, 1000=7.23% 00:15:57.007 lat (msec) : 2=8.81%, 4=8.40%, 10=13.93%, 20=4.58%, 50=48.94% 00:15:57.007 lat (msec) : 100=5.65%, 250=0.80%, 500=0.01% 00:15:57.007 cpu : usr=99.29%, sys=0.17%, ctx=130, majf=0, minf=5601 00:15:57.007 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:57.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.007 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.007 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.007 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.007 00:15:57.007 Run status group 0 (all jobs): 00:15:57.007 READ: bw=23.1MiB/s (24.2MB/s), 11.6MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=509MiB (534MB), run=21892-22061msec 00:15:57.007 WRITE: bw=27.3MiB/s 
(28.6MB/s), 13.6MiB/s-18.1MiB/s (14.3MB/s-19.0MB/s), io=512MiB (537MB), run=14132-18770msec 00:15:57.007 ----------------------------------------------------- 00:15:57.007 Suppressions used: 00:15:57.007 count bytes template 00:15:57.007 2 10 /usr/src/fio/parse.c 00:15:57.007 2 192 /usr/src/fio/iolog.c 00:15:57.007 1 8 libtcmalloc_minimal.so 00:15:57.007 1 904 libcrypto.so 00:15:57.007 ----------------------------------------------------- 00:15:57.007 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.007 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 
00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.008 08:59:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:57.008 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:57.008 fio-3.35 00:15:57.008 Starting 1 thread 00:16:11.950 00:16:11.950 test: (groupid=0, jobs=1): err= 0: pid=84978: Thu Nov 28 09:00:03 2024 00:16:11.950 read: IOPS=8180, BW=32.0MiB/s (33.5MB/s)(255MiB/7970msec) 00:16:11.950 slat (nsec): min=2927, max=17715, avg=3376.81, stdev=584.35 00:16:11.950 clat (usec): min=1458, max=28990, avg=15640.18, stdev=2061.06 00:16:11.950 lat (usec): min=1464, max=28994, avg=15643.56, stdev=2061.08 00:16:11.950 clat percentiles (usec): 00:16:11.950 | 1.00th=[13960], 5.00th=[14222], 10.00th=[14353], 20.00th=[14484], 00:16:11.950 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:16:11.950 | 70.00th=[15270], 80.00th=[15795], 
90.00th=[19006], 95.00th=[20841], 00:16:11.950 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25297], 99.95th=[26084], 00:16:11.950 | 99.99th=[28181] 00:16:11.950 write: IOPS=11.1k, BW=43.3MiB/s (45.5MB/s)(256MiB/5906msec); 0 zone resets 00:16:11.950 slat (usec): min=4, max=376, avg= 6.01, stdev= 3.84 00:16:11.950 clat (usec): min=495, max=57243, avg=11471.87, stdev=12561.86 00:16:11.950 lat (usec): min=500, max=57250, avg=11477.88, stdev=12561.89 00:16:11.950 clat percentiles (usec): 00:16:11.950 | 1.00th=[ 750], 5.00th=[ 955], 10.00th=[ 1106], 20.00th=[ 1385], 00:16:11.950 | 30.00th=[ 1745], 40.00th=[ 2540], 50.00th=[ 8586], 60.00th=[10683], 00:16:11.950 | 70.00th=[13173], 80.00th=[16188], 90.00th=[33817], 95.00th=[39584], 00:16:11.950 | 99.00th=[49546], 99.50th=[51643], 99.90th=[54789], 99.95th=[55313], 00:16:11.950 | 99.99th=[56361] 00:16:11.950 bw ( KiB/s): min=33736, max=55336, per=98.43%, avg=43690.67, stdev=7786.78, samples=12 00:16:11.950 iops : min= 8434, max=13834, avg=10922.67, stdev=1946.70, samples=12 00:16:11.950 lat (usec) : 500=0.01%, 750=0.50%, 1000=2.70% 00:16:11.950 lat (msec) : 2=14.59%, 4=3.10%, 10=7.68%, 20=59.80%, 50=11.17% 00:16:11.950 lat (msec) : 100=0.46% 00:16:11.950 cpu : usr=99.15%, sys=0.15%, ctx=63, majf=0, minf=5577 00:16:11.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.950 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.950 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.950 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.950 00:16:11.950 Run status group 0 (all jobs): 00:16:11.950 READ: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=255MiB (267MB), run=7970-7970msec 00:16:11.950 WRITE: bw=43.3MiB/s (45.5MB/s), 43.3MiB/s-43.3MiB/s (45.5MB/s-45.5MB/s), io=256MiB (268MB), run=5906-5906msec 00:16:11.950 
----------------------------------------------------- 00:16:11.950 Suppressions used: 00:16:11.950 count bytes template 00:16:11.950 1 5 /usr/src/fio/parse.c 00:16:11.950 2 192 /usr/src/fio/iolog.c 00:16:11.950 1 8 libtcmalloc_minimal.so 00:16:11.950 1 904 libcrypto.so 00:16:11.950 ----------------------------------------------------- 00:16:11.950 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.950 Remove shared memory files 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69931 /dev/shm/spdk_tgt_trace.pid83371 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:11.950 ************************************ 00:16:11.950 END TEST ftl_fio_basic 00:16:11.950 ************************************ 00:16:11.950 00:16:11.950 real 0m56.251s 00:16:11.950 user 2m0.081s 00:16:11.950 sys 0m2.808s 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.950 09:00:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:11.950 09:00:04 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:11.950 
09:00:04 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:11.950 09:00:04 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:11.950 09:00:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:11.950 ************************************ 00:16:11.950 START TEST ftl_bdevperf 00:16:11.950 ************************************ 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:11.950 * Looking for test storage... 00:16:11.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 
00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:11.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.950 --rc genhtml_branch_coverage=1 00:16:11.950 --rc genhtml_function_coverage=1 00:16:11.950 --rc genhtml_legend=1 00:16:11.950 --rc geninfo_all_blocks=1 00:16:11.950 --rc geninfo_unexecuted_blocks=1 00:16:11.950 00:16:11.950 ' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:11.950 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.950 --rc genhtml_branch_coverage=1 00:16:11.950 --rc genhtml_function_coverage=1 00:16:11.950 --rc genhtml_legend=1 00:16:11.950 --rc geninfo_all_blocks=1 00:16:11.950 --rc geninfo_unexecuted_blocks=1 00:16:11.950 00:16:11.950 ' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:11.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.950 --rc genhtml_branch_coverage=1 00:16:11.950 --rc genhtml_function_coverage=1 00:16:11.950 --rc genhtml_legend=1 00:16:11.950 --rc geninfo_all_blocks=1 00:16:11.950 --rc geninfo_unexecuted_blocks=1 00:16:11.950 00:16:11.950 ' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:11.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.950 --rc genhtml_branch_coverage=1 00:16:11.950 --rc genhtml_function_coverage=1 00:16:11.950 --rc genhtml_legend=1 00:16:11.950 --rc geninfo_all_blocks=1 00:16:11.950 --rc geninfo_unexecuted_blocks=1 00:16:11.950 00:16:11.950 ' 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:11.950 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85216 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85216 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85216 ']' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.951 09:00:04 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.951 [2024-11-28 09:00:04.786930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:11.951 [2024-11-28 09:00:04.787082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85216 ] 00:16:11.951 [2024-11-28 09:00:04.938623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.951 [2024-11-28 09:00:05.011269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- ftl/common.sh@63 
-- # get_bdev_size nvme0n1 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:11.951 09:00:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:12.210 { 00:16:12.210 "name": "nvme0n1", 00:16:12.210 "aliases": [ 00:16:12.210 "b2733811-1aa0-45ec-92f5-d3eb6b91933e" 00:16:12.210 ], 00:16:12.210 "product_name": "NVMe disk", 00:16:12.210 "block_size": 4096, 00:16:12.210 "num_blocks": 1310720, 00:16:12.210 "uuid": "b2733811-1aa0-45ec-92f5-d3eb6b91933e", 00:16:12.210 "numa_id": -1, 00:16:12.210 "assigned_rate_limits": { 00:16:12.210 "rw_ios_per_sec": 0, 00:16:12.210 "rw_mbytes_per_sec": 0, 00:16:12.210 "r_mbytes_per_sec": 0, 00:16:12.210 "w_mbytes_per_sec": 0 00:16:12.210 }, 00:16:12.210 "claimed": true, 00:16:12.210 "claim_type": "read_many_write_one", 00:16:12.210 "zoned": false, 00:16:12.210 "supported_io_types": { 00:16:12.210 "read": true, 00:16:12.210 "write": true, 00:16:12.210 "unmap": true, 00:16:12.210 "flush": true, 00:16:12.210 "reset": true, 00:16:12.210 "nvme_admin": true, 00:16:12.210 "nvme_io": true, 00:16:12.210 "nvme_io_md": false, 00:16:12.210 "write_zeroes": true, 00:16:12.210 "zcopy": false, 00:16:12.210 "get_zone_info": false, 00:16:12.210 "zone_management": false, 00:16:12.210 "zone_append": false, 00:16:12.210 "compare": true, 00:16:12.210 "compare_and_write": false, 00:16:12.210 "abort": true, 00:16:12.210 "seek_hole": false, 00:16:12.210 "seek_data": false, 00:16:12.210 "copy": true, 00:16:12.210 "nvme_iov_md": false 00:16:12.210 }, 00:16:12.210 
"driver_specific": { 00:16:12.210 "nvme": [ 00:16:12.210 { 00:16:12.210 "pci_address": "0000:00:11.0", 00:16:12.210 "trid": { 00:16:12.210 "trtype": "PCIe", 00:16:12.210 "traddr": "0000:00:11.0" 00:16:12.210 }, 00:16:12.210 "ctrlr_data": { 00:16:12.210 "cntlid": 0, 00:16:12.210 "vendor_id": "0x1b36", 00:16:12.210 "model_number": "QEMU NVMe Ctrl", 00:16:12.210 "serial_number": "12341", 00:16:12.210 "firmware_revision": "8.0.0", 00:16:12.210 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:12.210 "oacs": { 00:16:12.210 "security": 0, 00:16:12.210 "format": 1, 00:16:12.210 "firmware": 0, 00:16:12.210 "ns_manage": 1 00:16:12.210 }, 00:16:12.210 "multi_ctrlr": false, 00:16:12.210 "ana_reporting": false 00:16:12.210 }, 00:16:12.210 "vs": { 00:16:12.210 "nvme_version": "1.4" 00:16:12.210 }, 00:16:12.210 "ns_data": { 00:16:12.210 "id": 1, 00:16:12.210 "can_share": false 00:16:12.210 } 00:16:12.210 } 00:16:12.210 ], 00:16:12.210 "mp_policy": "active_passive" 00:16:12.210 } 00:16:12.210 } 00:16:12.210 ]' 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:12.210 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:12.211 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:16:12.470 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e2897c94-efe7-4c92-8715-e856b15d3a9b 00:16:12.470 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:12.470 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2897c94-efe7-4c92-8715-e856b15d3a9b 00:16:12.731 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:12.993 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=1cb1710d-d955-49cb-b2d7-50e84144e590 00:16:12.993 09:00:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1cb1710d-d955-49cb-b2d7-50e84144e590 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:12.993 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:12.993 09:00:07 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:13.254 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:13.254 { 00:16:13.254 "name": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:13.254 "aliases": [ 00:16:13.254 "lvs/nvme0n1p0" 00:16:13.254 ], 00:16:13.254 "product_name": "Logical Volume", 00:16:13.254 "block_size": 4096, 00:16:13.254 "num_blocks": 26476544, 00:16:13.254 "uuid": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:13.254 "assigned_rate_limits": { 00:16:13.254 "rw_ios_per_sec": 0, 00:16:13.254 "rw_mbytes_per_sec": 0, 00:16:13.254 "r_mbytes_per_sec": 0, 00:16:13.254 "w_mbytes_per_sec": 0 00:16:13.254 }, 00:16:13.254 "claimed": false, 00:16:13.254 "zoned": false, 00:16:13.254 "supported_io_types": { 00:16:13.254 "read": true, 00:16:13.254 "write": true, 00:16:13.254 "unmap": true, 00:16:13.254 "flush": false, 00:16:13.254 "reset": true, 00:16:13.254 "nvme_admin": false, 00:16:13.254 "nvme_io": false, 00:16:13.254 "nvme_io_md": false, 00:16:13.254 "write_zeroes": true, 00:16:13.254 "zcopy": false, 00:16:13.254 "get_zone_info": false, 00:16:13.254 "zone_management": false, 00:16:13.254 "zone_append": false, 00:16:13.254 "compare": false, 00:16:13.254 "compare_and_write": false, 00:16:13.254 "abort": false, 00:16:13.254 "seek_hole": true, 00:16:13.254 "seek_data": true, 00:16:13.254 "copy": false, 00:16:13.254 "nvme_iov_md": false 00:16:13.254 }, 00:16:13.254 "driver_specific": { 00:16:13.254 "lvol": { 00:16:13.254 "lvol_store_uuid": "1cb1710d-d955-49cb-b2d7-50e84144e590", 00:16:13.254 "base_bdev": "nvme0n1", 00:16:13.254 "thin_provision": true, 00:16:13.254 "num_allocated_clusters": 0, 00:16:13.254 "snapshot": false, 00:16:13.254 "clone": false, 00:16:13.254 "esnap_clone": false 00:16:13.254 } 00:16:13.254 } 00:16:13.254 } 00:16:13.254 ]' 00:16:13.254 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- 
# jq '.[] .block_size' 00:16:13.254 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:13.254 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:13.514 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:13.775 { 00:16:13.775 "name": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:13.775 "aliases": [ 00:16:13.775 "lvs/nvme0n1p0" 00:16:13.775 ], 00:16:13.775 "product_name": "Logical Volume", 00:16:13.775 "block_size": 4096, 00:16:13.775 
"num_blocks": 26476544, 00:16:13.775 "uuid": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:13.775 "assigned_rate_limits": { 00:16:13.775 "rw_ios_per_sec": 0, 00:16:13.775 "rw_mbytes_per_sec": 0, 00:16:13.775 "r_mbytes_per_sec": 0, 00:16:13.775 "w_mbytes_per_sec": 0 00:16:13.775 }, 00:16:13.775 "claimed": false, 00:16:13.775 "zoned": false, 00:16:13.775 "supported_io_types": { 00:16:13.775 "read": true, 00:16:13.775 "write": true, 00:16:13.775 "unmap": true, 00:16:13.775 "flush": false, 00:16:13.775 "reset": true, 00:16:13.775 "nvme_admin": false, 00:16:13.775 "nvme_io": false, 00:16:13.775 "nvme_io_md": false, 00:16:13.775 "write_zeroes": true, 00:16:13.775 "zcopy": false, 00:16:13.775 "get_zone_info": false, 00:16:13.775 "zone_management": false, 00:16:13.775 "zone_append": false, 00:16:13.775 "compare": false, 00:16:13.775 "compare_and_write": false, 00:16:13.775 "abort": false, 00:16:13.775 "seek_hole": true, 00:16:13.775 "seek_data": true, 00:16:13.775 "copy": false, 00:16:13.775 "nvme_iov_md": false 00:16:13.775 }, 00:16:13.775 "driver_specific": { 00:16:13.775 "lvol": { 00:16:13.775 "lvol_store_uuid": "1cb1710d-d955-49cb-b2d7-50e84144e590", 00:16:13.775 "base_bdev": "nvme0n1", 00:16:13.775 "thin_provision": true, 00:16:13.775 "num_allocated_clusters": 0, 00:16:13.775 "snapshot": false, 00:16:13.775 "clone": false, 00:16:13.775 "esnap_clone": false 00:16:13.775 } 00:16:13.775 } 00:16:13.775 } 00:16:13.775 ]' 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 
00:16:13.775 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:13.775 09:00:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:14.037 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:14.297 { 00:16:14.297 "name": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:14.297 "aliases": [ 00:16:14.297 "lvs/nvme0n1p0" 00:16:14.297 ], 00:16:14.297 "product_name": "Logical Volume", 00:16:14.297 "block_size": 4096, 00:16:14.297 "num_blocks": 26476544, 00:16:14.297 "uuid": "7cf5f69b-37d5-4fb9-85a9-4048cb3a6595", 00:16:14.297 "assigned_rate_limits": { 00:16:14.297 "rw_ios_per_sec": 0, 00:16:14.297 "rw_mbytes_per_sec": 0, 00:16:14.297 "r_mbytes_per_sec": 0, 00:16:14.297 "w_mbytes_per_sec": 0 00:16:14.297 }, 00:16:14.297 "claimed": false, 00:16:14.297 "zoned": false, 00:16:14.297 "supported_io_types": { 00:16:14.297 "read": true, 00:16:14.297 "write": true, 00:16:14.297 "unmap": true, 00:16:14.297 "flush": false, 00:16:14.297 "reset": true, 00:16:14.297 "nvme_admin": false, 00:16:14.297 "nvme_io": false, 00:16:14.297 "nvme_io_md": false, 00:16:14.297 "write_zeroes": true, 00:16:14.297 
"zcopy": false, 00:16:14.297 "get_zone_info": false, 00:16:14.297 "zone_management": false, 00:16:14.297 "zone_append": false, 00:16:14.297 "compare": false, 00:16:14.297 "compare_and_write": false, 00:16:14.297 "abort": false, 00:16:14.297 "seek_hole": true, 00:16:14.297 "seek_data": true, 00:16:14.297 "copy": false, 00:16:14.297 "nvme_iov_md": false 00:16:14.297 }, 00:16:14.297 "driver_specific": { 00:16:14.297 "lvol": { 00:16:14.297 "lvol_store_uuid": "1cb1710d-d955-49cb-b2d7-50e84144e590", 00:16:14.297 "base_bdev": "nvme0n1", 00:16:14.297 "thin_provision": true, 00:16:14.297 "num_allocated_clusters": 0, 00:16:14.297 "snapshot": false, 00:16:14.297 "clone": false, 00:16:14.297 "esnap_clone": false 00:16:14.297 } 00:16:14.297 } 00:16:14.297 } 00:16:14.297 ]' 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:14.297 09:00:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7cf5f69b-37d5-4fb9-85a9-4048cb3a6595 -c nvc0n1p0 --l2p_dram_limit 20 00:16:14.556 [2024-11-28 09:00:08.559241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.559283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:14.556 [2024-11-28 09:00:08.559297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:14.556 [2024-11-28 
09:00:08.559306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.559347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.559355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.556 [2024-11-28 09:00:08.559365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:14.556 [2024-11-28 09:00:08.559370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.559386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:14.556 [2024-11-28 09:00:08.559576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:14.556 [2024-11-28 09:00:08.559593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.559600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.556 [2024-11-28 09:00:08.559608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:14.556 [2024-11-28 09:00:08.559615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.559644] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 35016485-599f-4265-844c-e857fc12e2c4 00:16:14.556 [2024-11-28 09:00:08.560970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.560995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:14.556 [2024-11-28 09:00:08.561002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:14.556 [2024-11-28 09:00:08.561011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.567880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.567907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.556 [2024-11-28 09:00:08.567915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.836 ms 00:16:14.556 [2024-11-28 09:00:08.567926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.568015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.568025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.556 [2024-11-28 09:00:08.568032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:14.556 [2024-11-28 09:00:08.568040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.568085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.568096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:14.556 [2024-11-28 09:00:08.568105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:14.556 [2024-11-28 09:00:08.568113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.568132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.556 [2024-11-28 09:00:08.569785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.556 [2024-11-28 09:00:08.569821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.556 [2024-11-28 09:00:08.569831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:16:14.556 [2024-11-28 09:00:08.569837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.556 [2024-11-28 09:00:08.569866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.557 [2024-11-28 
09:00:08.569873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:14.557 [2024-11-28 09:00:08.569887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:14.557 [2024-11-28 09:00:08.569893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.557 [2024-11-28 09:00:08.569909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:14.557 [2024-11-28 09:00:08.570025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:14.557 [2024-11-28 09:00:08.570037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:14.557 [2024-11-28 09:00:08.570046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:14.557 [2024-11-28 09:00:08.570056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570068] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570079] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:14.557 [2024-11-28 09:00:08.570085] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:14.557 [2024-11-28 09:00:08.570092] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:14.557 [2024-11-28 09:00:08.570100] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:14.557 [2024-11-28 09:00:08.570108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.557 [2024-11-28 09:00:08.570115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:14.557 [2024-11-28 09:00:08.570125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.200 ms 00:16:14.557 [2024-11-28 09:00:08.570131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.557 [2024-11-28 09:00:08.570199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.557 [2024-11-28 09:00:08.570207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:14.557 [2024-11-28 09:00:08.570217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:14.557 [2024-11-28 09:00:08.570225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.557 [2024-11-28 09:00:08.570297] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:14.557 [2024-11-28 09:00:08.570311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:14.557 [2024-11-28 09:00:08.570319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:14.557 [2024-11-28 09:00:08.570340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:14.557 [2024-11-28 09:00:08.570359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:14.557 [2024-11-28 09:00:08.570371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:14.557 [2024-11-28 09:00:08.570377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:14.557 [2024-11-28 09:00:08.570385] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:16:14.557 [2024-11-28 09:00:08.570392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:14.557 [2024-11-28 09:00:08.570399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:14.557 [2024-11-28 09:00:08.570404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:14.557 [2024-11-28 09:00:08.570417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:14.557 [2024-11-28 09:00:08.570441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:14.557 [2024-11-28 09:00:08.570461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:14.557 [2024-11-28 09:00:08.570483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:14.557 [2024-11-28 09:00:08.570505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570512] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:14.557 [2024-11-28 09:00:08.570526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.557 [2024-11-28 09:00:08.570539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:14.557 [2024-11-28 09:00:08.570545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:14.557 [2024-11-28 09:00:08.570552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:14.557 [2024-11-28 09:00:08.570559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:14.557 [2024-11-28 09:00:08.570566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:14.557 [2024-11-28 09:00:08.570572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:14.557 [2024-11-28 09:00:08.570586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:14.557 [2024-11-28 09:00:08.570594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:14.557 [2024-11-28 09:00:08.570613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:14.557 [2024-11-28 09:00:08.570619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:14.557 [2024-11-28 09:00:08.570637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:14.557 [2024-11-28 
09:00:08.570646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:14.557 [2024-11-28 09:00:08.570652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:14.557 [2024-11-28 09:00:08.570661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:14.557 [2024-11-28 09:00:08.570669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:14.557 [2024-11-28 09:00:08.570677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:14.557 [2024-11-28 09:00:08.570687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:14.557 [2024-11-28 09:00:08.570696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:14.557 [2024-11-28 09:00:08.570715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:14.557 [2024-11-28 09:00:08.570722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:14.557 [2024-11-28 09:00:08.570730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:14.557 [2024-11-28 09:00:08.570736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:14.557 [2024-11-28 09:00:08.570747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:14.557 [2024-11-28 09:00:08.570753] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:14.557 [2024-11-28 09:00:08.570760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:14.557 [2024-11-28 09:00:08.570766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:14.557 [2024-11-28 09:00:08.570774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:14.557 [2024-11-28 09:00:08.570863] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:14.557 [2024-11-28 09:00:08.570872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:14.557 [2024-11-28 09:00:08.570887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:16:14.557 [2024-11-28 09:00:08.570893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:14.557 [2024-11-28 09:00:08.570901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:14.557 [2024-11-28 09:00:08.570908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.557 [2024-11-28 09:00:08.570920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:14.558 [2024-11-28 09:00:08.570928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:16:14.558 [2024-11-28 09:00:08.570935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.558 [2024-11-28 09:00:08.570962] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:14.558 [2024-11-28 09:00:08.570972] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:18.757 [2024-11-28 09:00:12.591040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.591156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:18.757 [2024-11-28 09:00:12.591177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4020.045 ms 00:16:18.757 [2024-11-28 09:00:12.591190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.620109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.620189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:18.757 [2024-11-28 09:00:12.620208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.777 ms 00:16:18.757 [2024-11-28 09:00:12.620224] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.620366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.620383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:18.757 [2024-11-28 09:00:12.620394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:18.757 [2024-11-28 09:00:12.620405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.637235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.637297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:18.757 [2024-11-28 09:00:12.637312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.748 ms 00:16:18.757 [2024-11-28 09:00:12.637325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.637363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.637376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:18.757 [2024-11-28 09:00:12.637386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.757 [2024-11-28 09:00:12.637399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.638164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.638208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:18.757 [2024-11-28 09:00:12.638221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:16:18.757 [2024-11-28 09:00:12.638243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.638373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 
09:00:12.638388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:18.757 [2024-11-28 09:00:12.638402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:18.757 [2024-11-28 09:00:12.638414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.648144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.648468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:18.757 [2024-11-28 09:00:12.648490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:16:18.757 [2024-11-28 09:00:12.648503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.757 [2024-11-28 09:00:12.660042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:18.757 [2024-11-28 09:00:12.669361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.757 [2024-11-28 09:00:12.669413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:18.757 [2024-11-28 09:00:12.669428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.747 ms 00:16:18.757 [2024-11-28 09:00:12.669437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.759239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.759477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:18.758 [2024-11-28 09:00:12.759509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.764 ms 00:16:18.758 [2024-11-28 09:00:12.759519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.759739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.759758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finalize band initialization 00:16:18.758 [2024-11-28 09:00:12.759771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:18.758 [2024-11-28 09:00:12.759780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.766377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.766576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:18.758 [2024-11-28 09:00:12.766602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.513 ms 00:16:18.758 [2024-11-28 09:00:12.766613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.772027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.772076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:18.758 [2024-11-28 09:00:12.772091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:16:18.758 [2024-11-28 09:00:12.772098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.772473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.772487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:18.758 [2024-11-28 09:00:12.772508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:16:18.758 [2024-11-28 09:00:12.772516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.823157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.823335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:18.758 [2024-11-28 09:00:12.823415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.614 ms 00:16:18.758 
[2024-11-28 09:00:12.823442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.831365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.831535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:18.758 [2024-11-28 09:00:12.831728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms 00:16:18.758 [2024-11-28 09:00:12.831772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.837793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.837975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:18.758 [2024-11-28 09:00:12.838035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.941 ms 00:16:18.758 [2024-11-28 09:00:12.838047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.845141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.845325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:18.758 [2024-11-28 09:00:12.845354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.980 ms 00:16:18.758 [2024-11-28 09:00:12.845362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.845440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 [2024-11-28 09:00:12.845452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:18.758 [2024-11-28 09:00:12.845468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:18.758 [2024-11-28 09:00:12.845482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.845588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.758 
[2024-11-28 09:00:12.845600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:18.758 [2024-11-28 09:00:12.845612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:18.758 [2024-11-28 09:00:12.845620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.758 [2024-11-28 09:00:12.847045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4287.176 ms, result 0 00:16:18.758 { 00:16:18.758 "name": "ftl0", 00:16:18.758 "uuid": "35016485-599f-4265-844c-e857fc12e2c4" 00:16:18.758 } 00:16:18.758 09:00:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:18.758 09:00:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:18.758 09:00:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:19.020 09:00:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:19.282 [2024-11-28 09:00:13.185097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:19.282 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:19.282 Zero copy mechanism will not be used. 00:16:19.282 Running I/O for 4 seconds... 
00:16:21.172 974.00 IOPS, 64.68 MiB/s [2024-11-28T09:00:16.231Z] 878.50 IOPS, 58.34 MiB/s [2024-11-28T09:00:17.616Z] 853.67 IOPS, 56.69 MiB/s [2024-11-28T09:00:17.616Z] 860.75 IOPS, 57.16 MiB/s 00:16:23.496 Latency(us) 00:16:23.496 [2024-11-28T09:00:17.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.496 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:23.496 ftl0 : 4.00 860.61 57.15 0.00 0.00 1234.34 185.90 2999.53 00:16:23.496 [2024-11-28T09:00:17.616Z] =================================================================================================================== 00:16:23.496 [2024-11-28T09:00:17.616Z] Total : 860.61 57.15 0.00 0.00 1234.34 185.90 2999.53 00:16:23.496 { 00:16:23.496 "results": [ 00:16:23.496 { 00:16:23.496 "job": "ftl0", 00:16:23.496 "core_mask": "0x1", 00:16:23.496 "workload": "randwrite", 00:16:23.496 "status": "finished", 00:16:23.496 "queue_depth": 1, 00:16:23.496 "io_size": 69632, 00:16:23.496 "runtime": 4.001796, 00:16:23.496 "iops": 860.6135845005592, 00:16:23.496 "mibps": 57.15012084574026, 00:16:23.496 "io_failed": 0, 00:16:23.496 "io_timeout": 0, 00:16:23.496 "avg_latency_us": 1234.3423210935407, 00:16:23.496 "min_latency_us": 185.89538461538461, 00:16:23.496 "max_latency_us": 2999.5323076923078 00:16:23.496 } 00:16:23.496 ], 00:16:23.496 "core_count": 1 00:16:23.496 } 00:16:23.496 [2024-11-28 09:00:17.194140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:23.496 09:00:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:23.496 [2024-11-28 09:00:17.307029] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:23.496 Running I/O for 4 seconds... 
00:16:25.384 5968.00 IOPS, 23.31 MiB/s [2024-11-28T09:00:20.449Z] 5867.00 IOPS, 22.92 MiB/s [2024-11-28T09:00:21.395Z] 5476.33 IOPS, 21.39 MiB/s [2024-11-28T09:00:21.395Z] 5450.50 IOPS, 21.29 MiB/s 00:16:27.275 Latency(us) 00:16:27.275 [2024-11-28T09:00:21.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.275 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.275 ftl0 : 4.03 5435.80 21.23 0.00 0.00 23451.44 351.31 48395.82 00:16:27.275 [2024-11-28T09:00:21.395Z] =================================================================================================================== 00:16:27.275 [2024-11-28T09:00:21.395Z] Total : 5435.80 21.23 0.00 0.00 23451.44 0.00 48395.82 00:16:27.275 [2024-11-28 09:00:21.349004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:27.275 { 00:16:27.275 "results": [ 00:16:27.275 { 00:16:27.275 "job": "ftl0", 00:16:27.275 "core_mask": "0x1", 00:16:27.275 "workload": "randwrite", 00:16:27.275 "status": "finished", 00:16:27.275 "queue_depth": 128, 00:16:27.275 "io_size": 4096, 00:16:27.275 "runtime": 4.034, 00:16:27.275 "iops": 5435.795736241944, 00:16:27.275 "mibps": 21.233577094695093, 00:16:27.275 "io_failed": 0, 00:16:27.275 "io_timeout": 0, 00:16:27.275 "avg_latency_us": 23451.440271658295, 00:16:27.275 "min_latency_us": 351.3107692307692, 00:16:27.275 "max_latency_us": 48395.81538461539 00:16:27.275 } 00:16:27.275 ], 00:16:27.275 "core_count": 1 00:16:27.275 } 00:16:27.275 09:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:27.537 [2024-11-28 09:00:21.468122] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:27.537 Running I/O for 4 seconds... 
00:16:29.426 5170.00 IOPS, 20.20 MiB/s [2024-11-28T09:00:24.514Z] 5560.00 IOPS, 21.72 MiB/s [2024-11-28T09:00:25.899Z] 5483.00 IOPS, 21.42 MiB/s [2024-11-28T09:00:25.899Z] 5381.00 IOPS, 21.02 MiB/s 00:16:31.779 Latency(us) 00:16:31.779 [2024-11-28T09:00:25.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.779 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:31.779 Verification LBA range: start 0x0 length 0x1400000 00:16:31.779 ftl0 : 4.02 5389.85 21.05 0.00 0.00 23667.14 263.09 41539.74 00:16:31.779 [2024-11-28T09:00:25.899Z] =================================================================================================================== 00:16:31.779 [2024-11-28T09:00:25.899Z] Total : 5389.85 21.05 0.00 0.00 23667.14 0.00 41539.74 00:16:31.779 { 00:16:31.779 "results": [ 00:16:31.779 { 00:16:31.779 "job": "ftl0", 00:16:31.779 "core_mask": "0x1", 00:16:31.779 "workload": "verify", 00:16:31.779 "status": "finished", 00:16:31.779 "verify_range": { 00:16:31.779 "start": 0, 00:16:31.779 "length": 20971520 00:16:31.779 }, 00:16:31.779 "queue_depth": 128, 00:16:31.779 "io_size": 4096, 00:16:31.779 "runtime": 4.016065, 00:16:31.779 "iops": 5389.853002877195, 00:16:31.779 "mibps": 21.054113292489042, 00:16:31.779 "io_failed": 0, 00:16:31.779 "io_timeout": 0, 00:16:31.779 "avg_latency_us": 23667.143868826362, 00:16:31.779 "min_latency_us": 263.08923076923077, 00:16:31.779 "max_latency_us": 41539.74153846154 00:16:31.779 } 00:16:31.779 ], 00:16:31.779 "core_count": 1 00:16:31.779 } 00:16:31.779 [2024-11-28 09:00:25.493929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:31.779 09:00:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:31.779 [2024-11-28 09:00:25.710335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.779 [2024-11-28 09:00:25.710395] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.779 [2024-11-28 09:00:25.710416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:31.779 [2024-11-28 09:00:25.710426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.779 [2024-11-28 09:00:25.710455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.779 [2024-11-28 09:00:25.711448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.779 [2024-11-28 09:00:25.711508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:31.779 [2024-11-28 09:00:25.711521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:16:31.779 [2024-11-28 09:00:25.711541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.779 [2024-11-28 09:00:25.714610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.779 [2024-11-28 09:00:25.714872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:31.779 [2024-11-28 09:00:25.714895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:16:31.779 [2024-11-28 09:00:25.714910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.042 [2024-11-28 09:00:25.939460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.042 [2024-11-28 09:00:25.939530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.042 [2024-11-28 09:00:25.939544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 224.524 ms 00:16:32.042 [2024-11-28 09:00:25.939557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.042 [2024-11-28 09:00:25.945778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.042 [2024-11-28 09:00:25.945839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P trims 00:16:32.042 [2024-11-28 09:00:25.945852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:16:32.042 [2024-11-28 09:00:25.945864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.042 [2024-11-28 09:00:25.948816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.042 [2024-11-28 09:00:25.948869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.042 [2024-11-28 09:00:25.948881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:16:32.042 [2024-11-28 09:00:25.948893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.042 [2024-11-28 09:00:25.955673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.042 [2024-11-28 09:00:25.955733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.042 [2024-11-28 09:00:25.955752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.733 ms 00:16:32.042 [2024-11-28 09:00:25.955772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.042 [2024-11-28 09:00:25.955943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.042 [2024-11-28 09:00:25.955973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.042 [2024-11-28 09:00:25.955982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:32.043 [2024-11-28 09:00:25.955994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.043 [2024-11-28 09:00:25.959879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.043 [2024-11-28 09:00:25.959953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:32.043 [2024-11-28 09:00:25.959966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.866 ms 00:16:32.043 [2024-11-28 
09:00:25.959978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.043 [2024-11-28 09:00:25.962786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.043 [2024-11-28 09:00:25.962886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:32.043 [2024-11-28 09:00:25.962897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:16:32.043 [2024-11-28 09:00:25.962907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.043 [2024-11-28 09:00:25.965152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.043 [2024-11-28 09:00:25.965210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.043 [2024-11-28 09:00:25.965222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:16:32.043 [2024-11-28 09:00:25.965236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.043 [2024-11-28 09:00:25.967451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.043 [2024-11-28 09:00:25.967506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.043 [2024-11-28 09:00:25.967517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:16:32.043 [2024-11-28 09:00:25.967529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.043 [2024-11-28 09:00:25.967574] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.043 [2024-11-28 09:00:25.967596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 
[2024-11-28 09:00:25.967632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 
[2024-11-28 09:00:25.967782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:16:32.043 [2024-11-28 09:00:25.967943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.967992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: 
free 00:16:32.043 [2024-11-28 09:00:25.968076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:16:32.043 [2024-11-28 09:00:25.968227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 
0 state: free 00:16:32.043 [2024-11-28 09:00:25.968359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.043 [2024-11-28 09:00:25.968370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.044 [2024-11-28 09:00:25.968616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.044 
[2024-11-28 09:00:25.968624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35016485-599f-4265-844c-e857fc12e2c4 00:16:32.044 [2024-11-28 09:00:25.968635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.044 [2024-11-28 09:00:25.968647] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.044 [2024-11-28 09:00:25.968657] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.044 [2024-11-28 09:00:25.968666] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.044 [2024-11-28 09:00:25.968679] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.044 [2024-11-28 09:00:25.968687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.044 [2024-11-28 09:00:25.968697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.044 [2024-11-28 09:00:25.968705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.044 [2024-11-28 09:00:25.968714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.044 [2024-11-28 09:00:25.968722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.044 [2024-11-28 09:00:25.968732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.044 [2024-11-28 09:00:25.968741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:16:32.044 [2024-11-28 09:00:25.968773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.971752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.044 [2024-11-28 09:00:25.971820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.044 [2024-11-28 09:00:25.971833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:16:32.044 [2024-11-28 09:00:25.971844] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.972016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.044 [2024-11-28 09:00:25.972029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.044 [2024-11-28 09:00:25.972040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:32.044 [2024-11-28 09:00:25.972053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.981626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:25.981683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.044 [2024-11-28 09:00:25.981695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:25.981708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.981775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:25.981787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.044 [2024-11-28 09:00:25.981870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:25.981881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.981963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:25.982003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.044 [2024-11-28 09:00:25.982013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:25.982025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:25.982045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 
09:00:25.982059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.044 [2024-11-28 09:00:25.982068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:25.982084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.001717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.002083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.044 [2024-11-28 09:00:26.002106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.002126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.018740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.019015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.044 [2024-11-28 09:00:26.019233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.019263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.019381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.019426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.044 [2024-11-28 09:00:26.019449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.019471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.019533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.019559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.044 [2024-11-28 09:00:26.019582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.019740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.019880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.019917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.044 [2024-11-28 09:00:26.019941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.019965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.020017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.020045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.044 [2024-11-28 09:00:26.020068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.020091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.020159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.020283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.044 [2024-11-28 09:00:26.020307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.020329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.020406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.044 [2024-11-28 09:00:26.020435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.044 [2024-11-28 09:00:26.020457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.044 [2024-11-28 09:00:26.020484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.044 [2024-11-28 09:00:26.020682] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.281 ms, result 0 00:16:32.044 true 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85216 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85216 ']' 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85216 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.044 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85216 00:16:32.044 killing process with pid 85216 00:16:32.044 Received shutdown signal, test time was about 4.000000 seconds 00:16:32.044 00:16:32.044 Latency(us) 00:16:32.045 [2024-11-28T09:00:26.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.045 [2024-11-28T09:00:26.165Z] =================================================================================================================== 00:16:32.045 [2024-11-28T09:00:26.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.045 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.045 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.045 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85216' 00:16:32.045 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85216 00:16:32.045 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85216 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:32.617 Remove shared memory files 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo 
Remove shared memory files 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:32.617 ************************************ 00:16:32.617 END TEST ftl_bdevperf 00:16:32.617 ************************************ 00:16:32.617 00:16:32.617 real 0m21.908s 00:16:32.617 user 0m24.446s 00:16:32.617 sys 0m1.006s 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.617 09:00:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:32.617 09:00:26 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:32.617 09:00:26 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:32.617 09:00:26 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.617 09:00:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:32.617 ************************************ 00:16:32.617 START TEST ftl_trim 00:16:32.617 ************************************ 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:32.617 * Looking for test storage... 
00:16:32.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.617 09:00:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:32.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.617 --rc genhtml_branch_coverage=1 00:16:32.617 --rc genhtml_function_coverage=1 00:16:32.617 --rc genhtml_legend=1 00:16:32.617 --rc geninfo_all_blocks=1 00:16:32.617 --rc geninfo_unexecuted_blocks=1 00:16:32.617 00:16:32.617 ' 00:16:32.617 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:32.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.617 --rc genhtml_branch_coverage=1 00:16:32.617 --rc genhtml_function_coverage=1 00:16:32.617 --rc genhtml_legend=1 00:16:32.617 --rc geninfo_all_blocks=1 00:16:32.617 --rc geninfo_unexecuted_blocks=1 00:16:32.617 00:16:32.617 ' 00:16:32.617 
09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:32.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.617 --rc genhtml_branch_coverage=1 00:16:32.617 --rc genhtml_function_coverage=1 00:16:32.617 --rc genhtml_legend=1 00:16:32.617 --rc geninfo_all_blocks=1 00:16:32.617 --rc geninfo_unexecuted_blocks=1 00:16:32.617 00:16:32.618 ' 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:32.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.618 --rc genhtml_branch_coverage=1 00:16:32.618 --rc genhtml_function_coverage=1 00:16:32.618 --rc genhtml_legend=1 00:16:32.618 --rc geninfo_all_blocks=1 00:16:32.618 --rc geninfo_unexecuted_blocks=1 00:16:32.618 00:16:32.618 ' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.618 09:00:26 
ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85570 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:32.618 09:00:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85570 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85570 ']' 00:16:32.618 09:00:26 ftl.ftl_trim -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.618 09:00:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:32.880 [2024-11-28 09:00:26.807461] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:32.880 [2024-11-28 09:00:26.807603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85570 ] 00:16:32.880 [2024-11-28 09:00:26.959619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.141 [2024-11-28 09:00:27.035440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.141 [2024-11-28 09:00:27.035871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.141 [2024-11-28 09:00:27.035902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.713 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.713 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:33.713 09:00:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:33.713 09:00:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:33.713 09:00:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:33.713 09:00:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:33.713 09:00:27 ftl.ftl_trim 
-- ftl/common.sh@59 -- # local base_bdev 00:16:33.713 09:00:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:33.975 09:00:27 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:33.975 09:00:27 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:33.975 09:00:27 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:33.975 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:33.975 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:33.975 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:33.975 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:33.975 09:00:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:34.237 { 00:16:34.237 "name": "nvme0n1", 00:16:34.237 "aliases": [ 00:16:34.237 "ad9b3102-037e-4e6b-a9cd-40ca05588f1e" 00:16:34.237 ], 00:16:34.237 "product_name": "NVMe disk", 00:16:34.237 "block_size": 4096, 00:16:34.237 "num_blocks": 1310720, 00:16:34.237 "uuid": "ad9b3102-037e-4e6b-a9cd-40ca05588f1e", 00:16:34.237 "numa_id": -1, 00:16:34.237 "assigned_rate_limits": { 00:16:34.237 "rw_ios_per_sec": 0, 00:16:34.237 "rw_mbytes_per_sec": 0, 00:16:34.237 "r_mbytes_per_sec": 0, 00:16:34.237 "w_mbytes_per_sec": 0 00:16:34.237 }, 00:16:34.237 "claimed": true, 00:16:34.237 "claim_type": "read_many_write_one", 00:16:34.237 "zoned": false, 00:16:34.237 "supported_io_types": { 00:16:34.237 "read": true, 00:16:34.237 "write": true, 00:16:34.237 "unmap": true, 00:16:34.237 "flush": true, 00:16:34.237 "reset": true, 00:16:34.237 "nvme_admin": true, 00:16:34.237 "nvme_io": true, 00:16:34.237 "nvme_io_md": false, 00:16:34.237 "write_zeroes": true, 00:16:34.237 
"zcopy": false, 00:16:34.237 "get_zone_info": false, 00:16:34.237 "zone_management": false, 00:16:34.237 "zone_append": false, 00:16:34.237 "compare": true, 00:16:34.237 "compare_and_write": false, 00:16:34.237 "abort": true, 00:16:34.237 "seek_hole": false, 00:16:34.237 "seek_data": false, 00:16:34.237 "copy": true, 00:16:34.237 "nvme_iov_md": false 00:16:34.237 }, 00:16:34.237 "driver_specific": { 00:16:34.237 "nvme": [ 00:16:34.237 { 00:16:34.237 "pci_address": "0000:00:11.0", 00:16:34.237 "trid": { 00:16:34.237 "trtype": "PCIe", 00:16:34.237 "traddr": "0000:00:11.0" 00:16:34.237 }, 00:16:34.237 "ctrlr_data": { 00:16:34.237 "cntlid": 0, 00:16:34.237 "vendor_id": "0x1b36", 00:16:34.237 "model_number": "QEMU NVMe Ctrl", 00:16:34.237 "serial_number": "12341", 00:16:34.237 "firmware_revision": "8.0.0", 00:16:34.237 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:34.237 "oacs": { 00:16:34.237 "security": 0, 00:16:34.237 "format": 1, 00:16:34.237 "firmware": 0, 00:16:34.237 "ns_manage": 1 00:16:34.237 }, 00:16:34.237 "multi_ctrlr": false, 00:16:34.237 "ana_reporting": false 00:16:34.237 }, 00:16:34.237 "vs": { 00:16:34.237 "nvme_version": "1.4" 00:16:34.237 }, 00:16:34.237 "ns_data": { 00:16:34.237 "id": 1, 00:16:34.237 "can_share": false 00:16:34.237 } 00:16:34.237 } 00:16:34.237 ], 00:16:34.237 "mp_policy": "active_passive" 00:16:34.237 } 00:16:34.237 } 00:16:34.237 ]' 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:34.237 09:00:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:34.237 09:00:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 
00:16:34.237 09:00:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:34.237 09:00:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:34.237 09:00:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:34.237 09:00:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:34.497 09:00:28 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=1cb1710d-d955-49cb-b2d7-50e84144e590 00:16:34.497 09:00:28 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:34.497 09:00:28 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cb1710d-d955-49cb-b2d7-50e84144e590 00:16:34.757 09:00:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:35.016 09:00:28 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=add5ba8c-dfae-4d00-9e39-afc542ed44ff 00:16:35.016 09:00:28 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u add5ba8c-dfae-4d00-9e39-afc542ed44ff 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:35.016 09:00:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.016 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.016 09:00:29 
ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.016 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:35.016 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:35.016 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.275 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.275 { 00:16:35.275 "name": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:35.275 "aliases": [ 00:16:35.275 "lvs/nvme0n1p0" 00:16:35.275 ], 00:16:35.275 "product_name": "Logical Volume", 00:16:35.275 "block_size": 4096, 00:16:35.275 "num_blocks": 26476544, 00:16:35.275 "uuid": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:35.275 "assigned_rate_limits": { 00:16:35.275 "rw_ios_per_sec": 0, 00:16:35.275 "rw_mbytes_per_sec": 0, 00:16:35.275 "r_mbytes_per_sec": 0, 00:16:35.275 "w_mbytes_per_sec": 0 00:16:35.275 }, 00:16:35.275 "claimed": false, 00:16:35.275 "zoned": false, 00:16:35.275 "supported_io_types": { 00:16:35.275 "read": true, 00:16:35.275 "write": true, 00:16:35.275 "unmap": true, 00:16:35.275 "flush": false, 00:16:35.275 "reset": true, 00:16:35.275 "nvme_admin": false, 00:16:35.275 "nvme_io": false, 00:16:35.275 "nvme_io_md": false, 00:16:35.275 "write_zeroes": true, 00:16:35.275 "zcopy": false, 00:16:35.275 "get_zone_info": false, 00:16:35.275 "zone_management": false, 00:16:35.275 "zone_append": false, 00:16:35.275 "compare": false, 00:16:35.275 "compare_and_write": false, 00:16:35.275 "abort": false, 00:16:35.275 "seek_hole": true, 00:16:35.275 "seek_data": true, 00:16:35.275 "copy": false, 00:16:35.275 "nvme_iov_md": false 00:16:35.275 }, 00:16:35.275 "driver_specific": { 00:16:35.275 "lvol": { 00:16:35.275 "lvol_store_uuid": "add5ba8c-dfae-4d00-9e39-afc542ed44ff", 00:16:35.275 "base_bdev": "nvme0n1", 00:16:35.275 "thin_provision": true, 00:16:35.275 
"num_allocated_clusters": 0, 00:16:35.275 "snapshot": false, 00:16:35.275 "clone": false, 00:16:35.275 "esnap_clone": false 00:16:35.275 } 00:16:35.275 } 00:16:35.275 } 00:16:35.275 ]' 00:16:35.275 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.275 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.275 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.534 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:35.534 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.534 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.534 09:00:29 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:35.534 09:00:29 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:35.534 09:00:29 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:35.534 09:00:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:35.534 09:00:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:35.793 09:00:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.793 { 00:16:35.793 "name": 
"a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:35.793 "aliases": [ 00:16:35.793 "lvs/nvme0n1p0" 00:16:35.793 ], 00:16:35.793 "product_name": "Logical Volume", 00:16:35.793 "block_size": 4096, 00:16:35.793 "num_blocks": 26476544, 00:16:35.793 "uuid": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:35.793 "assigned_rate_limits": { 00:16:35.793 "rw_ios_per_sec": 0, 00:16:35.793 "rw_mbytes_per_sec": 0, 00:16:35.793 "r_mbytes_per_sec": 0, 00:16:35.793 "w_mbytes_per_sec": 0 00:16:35.793 }, 00:16:35.793 "claimed": false, 00:16:35.793 "zoned": false, 00:16:35.793 "supported_io_types": { 00:16:35.793 "read": true, 00:16:35.793 "write": true, 00:16:35.793 "unmap": true, 00:16:35.793 "flush": false, 00:16:35.793 "reset": true, 00:16:35.793 "nvme_admin": false, 00:16:35.793 "nvme_io": false, 00:16:35.793 "nvme_io_md": false, 00:16:35.793 "write_zeroes": true, 00:16:35.793 "zcopy": false, 00:16:35.793 "get_zone_info": false, 00:16:35.793 "zone_management": false, 00:16:35.793 "zone_append": false, 00:16:35.793 "compare": false, 00:16:35.793 "compare_and_write": false, 00:16:35.793 "abort": false, 00:16:35.793 "seek_hole": true, 00:16:35.793 "seek_data": true, 00:16:35.793 "copy": false, 00:16:35.793 "nvme_iov_md": false 00:16:35.793 }, 00:16:35.793 "driver_specific": { 00:16:35.793 "lvol": { 00:16:35.793 "lvol_store_uuid": "add5ba8c-dfae-4d00-9e39-afc542ed44ff", 00:16:35.793 "base_bdev": "nvme0n1", 00:16:35.793 "thin_provision": true, 00:16:35.793 "num_allocated_clusters": 0, 00:16:35.793 "snapshot": false, 00:16:35.793 "clone": false, 00:16:35.793 "esnap_clone": false 00:16:35.793 } 00:16:35.793 } 00:16:35.793 } 00:16:35.793 ]' 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.793 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 
00:16:35.794 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.794 09:00:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.794 09:00:29 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:35.794 09:00:29 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.052 09:00:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:36.052 09:00:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:36.052 09:00:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:36.052 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:36.052 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.052 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:36.052 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:36.052 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a22ca64d-4417-43d1-9743-df8d50ab15c9 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.310 { 00:16:36.310 "name": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:36.310 "aliases": [ 00:16:36.310 "lvs/nvme0n1p0" 00:16:36.310 ], 00:16:36.310 "product_name": "Logical Volume", 00:16:36.310 "block_size": 4096, 00:16:36.310 "num_blocks": 26476544, 00:16:36.310 "uuid": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:36.310 "assigned_rate_limits": { 00:16:36.310 "rw_ios_per_sec": 0, 00:16:36.310 "rw_mbytes_per_sec": 0, 00:16:36.310 "r_mbytes_per_sec": 0, 00:16:36.310 "w_mbytes_per_sec": 0 00:16:36.310 }, 00:16:36.310 "claimed": false, 00:16:36.310 "zoned": false, 00:16:36.310 "supported_io_types": { 00:16:36.310 "read": true, 00:16:36.310 "write": true, 00:16:36.310 "unmap": true, 
00:16:36.310 "flush": false, 00:16:36.310 "reset": true, 00:16:36.310 "nvme_admin": false, 00:16:36.310 "nvme_io": false, 00:16:36.310 "nvme_io_md": false, 00:16:36.310 "write_zeroes": true, 00:16:36.310 "zcopy": false, 00:16:36.310 "get_zone_info": false, 00:16:36.310 "zone_management": false, 00:16:36.310 "zone_append": false, 00:16:36.310 "compare": false, 00:16:36.310 "compare_and_write": false, 00:16:36.310 "abort": false, 00:16:36.310 "seek_hole": true, 00:16:36.310 "seek_data": true, 00:16:36.310 "copy": false, 00:16:36.310 "nvme_iov_md": false 00:16:36.310 }, 00:16:36.310 "driver_specific": { 00:16:36.310 "lvol": { 00:16:36.310 "lvol_store_uuid": "add5ba8c-dfae-4d00-9e39-afc542ed44ff", 00:16:36.310 "base_bdev": "nvme0n1", 00:16:36.310 "thin_provision": true, 00:16:36.310 "num_allocated_clusters": 0, 00:16:36.310 "snapshot": false, 00:16:36.310 "clone": false, 00:16:36.310 "esnap_clone": false 00:16:36.310 } 00:16:36.310 } 00:16:36.310 } 00:16:36.310 ]' 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.310 09:00:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.310 09:00:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:36.310 09:00:30 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a22ca64d-4417-43d1-9743-df8d50ab15c9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:36.570 [2024-11-28 09:00:30.554509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.570 [2024-11-28 09:00:30.554558] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:36.570 [2024-11-28 09:00:30.554572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:36.570 [2024-11-28 09:00:30.554592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.570 [2024-11-28 09:00:30.557103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.570 [2024-11-28 09:00:30.557139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:36.570 [2024-11-28 09:00:30.557150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:16:36.570 [2024-11-28 09:00:30.557161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.570 [2024-11-28 09:00:30.557261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:36.570 [2024-11-28 09:00:30.557541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:36.570 [2024-11-28 09:00:30.557566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.570 [2024-11-28 09:00:30.557579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:36.570 [2024-11-28 09:00:30.557597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:16:36.570 [2024-11-28 09:00:30.557606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.570 [2024-11-28 09:00:30.557713] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:16:36.570 [2024-11-28 09:00:30.559071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.570 [2024-11-28 09:00:30.559103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:36.570 [2024-11-28 09:00:30.559115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 
ms 00:16:36.570 [2024-11-28 09:00:30.559123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.570 [2024-11-28 09:00:30.566264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.570 [2024-11-28 09:00:30.566293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:36.571 [2024-11-28 09:00:30.566304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.043 ms 00:16:36.571 [2024-11-28 09:00:30.566323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.566440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.571 [2024-11-28 09:00:30.566450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:36.571 [2024-11-28 09:00:30.566460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:36.571 [2024-11-28 09:00:30.566480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.566517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.571 [2024-11-28 09:00:30.566527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:36.571 [2024-11-28 09:00:30.566537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:36.571 [2024-11-28 09:00:30.566545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.566577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:36.571 [2024-11-28 09:00:30.568356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.571 [2024-11-28 09:00:30.568548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:36.571 [2024-11-28 09:00:30.568563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:16:36.571 [2024-11-28 
09:00:30.568573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.568629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.571 [2024-11-28 09:00:30.568642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:36.571 [2024-11-28 09:00:30.568651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:36.571 [2024-11-28 09:00:30.568661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.568686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:36.571 [2024-11-28 09:00:30.568860] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:36.571 [2024-11-28 09:00:30.568875] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:36.571 [2024-11-28 09:00:30.568887] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:36.571 [2024-11-28 09:00:30.568898] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:36.571 [2024-11-28 09:00:30.568909] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:36.571 [2024-11-28 09:00:30.568917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:36.571 [2024-11-28 09:00:30.568926] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:36.571 [2024-11-28 09:00:30.568934] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:36.571 [2024-11-28 09:00:30.568945] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:36.571 [2024-11-28 09:00:30.568953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:36.571 [2024-11-28 09:00:30.568961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:36.571 [2024-11-28 09:00:30.568969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:36.571 [2024-11-28 09:00:30.568979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.569078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.571 [2024-11-28 09:00:30.569091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:36.571 [2024-11-28 09:00:30.569100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:36.571 [2024-11-28 09:00:30.569111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.571 [2024-11-28 09:00:30.569227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:36.571 [2024-11-28 09:00:30.569240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:36.571 [2024-11-28 09:00:30.569249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:36.571 [2024-11-28 09:00:30.569280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:36.571 [2024-11-28 09:00:30.569305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.571 [2024-11-28 09:00:30.569322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region band_md_mirror 00:16:36.571 [2024-11-28 09:00:30.569331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:36.571 [2024-11-28 09:00:30.569339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:36.571 [2024-11-28 09:00:30.569352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:36.571 [2024-11-28 09:00:30.569360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:36.571 [2024-11-28 09:00:30.569370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:36.571 [2024-11-28 09:00:30.569386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:36.571 [2024-11-28 09:00:30.569413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:36.571 [2024-11-28 09:00:30.569438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:36.571 [2024-11-28 09:00:30.569464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569480] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region p2l3 00:16:36.571 [2024-11-28 09:00:30.569490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:36.571 [2024-11-28 09:00:30.569513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.571 [2024-11-28 09:00:30.569529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:36.571 [2024-11-28 09:00:30.569536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:36.571 [2024-11-28 09:00:30.569542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:36.571 [2024-11-28 09:00:30.569550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:36.571 [2024-11-28 09:00:30.569556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:36.571 [2024-11-28 09:00:30.569564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:36.571 [2024-11-28 09:00:30.569581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:36.571 [2024-11-28 09:00:30.569587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569611] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:36.571 [2024-11-28 09:00:30.569619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:36.571 [2024-11-28 09:00:30.569630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:36.571 [2024-11-28 
09:00:30.569637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:36.571 [2024-11-28 09:00:30.569648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:36.571 [2024-11-28 09:00:30.569655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:36.571 [2024-11-28 09:00:30.569663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:36.571 [2024-11-28 09:00:30.569670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:36.571 [2024-11-28 09:00:30.569679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:36.571 [2024-11-28 09:00:30.569686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:36.571 [2024-11-28 09:00:30.569697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:36.571 [2024-11-28 09:00:30.569706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.571 [2024-11-28 09:00:30.569717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:36.571 [2024-11-28 09:00:30.569724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:36.571 [2024-11-28 09:00:30.569733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:36.571 [2024-11-28 09:00:30.569740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:36.571 [2024-11-28 09:00:30.569749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:36.571 [2024-11-28 09:00:30.569756] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:36.571 [2024-11-28 09:00:30.569767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:36.572 [2024-11-28 09:00:30.569773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:36.572 [2024-11-28 09:00:30.569786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:36.572 [2024-11-28 09:00:30.569793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:36.572 [2024-11-28 09:00:30.569850] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:36.572 [2024-11-28 09:00:30.569858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:36.572 [2024-11-28 09:00:30.569877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:36.572 [2024-11-28 09:00:30.569885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:36.572 [2024-11-28 09:00:30.569892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:36.572 [2024-11-28 09:00:30.569901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:36.572 [2024-11-28 09:00:30.569909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:36.572 [2024-11-28 09:00:30.569922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:16:36.572 [2024-11-28 09:00:30.569929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:36.572 [2024-11-28 09:00:30.570009] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:36.572 [2024-11-28 09:00:30.570020] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:39.102 [2024-11-28 09:00:33.129057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.102 [2024-11-28 09:00:33.129297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:39.102 [2024-11-28 09:00:33.129324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2559.025 ms 00:16:39.102 [2024-11-28 09:00:33.129334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.102 [2024-11-28 09:00:33.149128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.102 [2024-11-28 09:00:33.149199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.102 [2024-11-28 09:00:33.149230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.664 ms 00:16:39.102 [2024-11-28 09:00:33.149248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.102 [2024-11-28 09:00:33.149519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.102 [2024-11-28 09:00:33.149542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.102 [2024-11-28 09:00:33.149562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:39.102 [2024-11-28 09:00:33.149577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.102 [2024-11-28 09:00:33.163059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.102 [2024-11-28 09:00:33.163091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.102 [2024-11-28 09:00:33.163104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.425 ms 00:16:39.102 [2024-11-28 09:00:33.163112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.102 [2024-11-28 09:00:33.163178] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.102 [2024-11-28 09:00:33.163187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.102 [2024-11-28 09:00:33.163198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:39.102 [2024-11-28 09:00:33.163205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.102 [2024-11-28 09:00:33.163610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.103 [2024-11-28 09:00:33.163625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.103 [2024-11-28 09:00:33.163636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:16:39.103 [2024-11-28 09:00:33.163657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.103 [2024-11-28 09:00:33.163788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.103 [2024-11-28 09:00:33.163816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.103 [2024-11-28 09:00:33.163838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:39.103 [2024-11-28 09:00:33.163847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.103 [2024-11-28 09:00:33.170693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.103 [2024-11-28 09:00:33.170891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:39.103 [2024-11-28 09:00:33.170911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:16:39.103 [2024-11-28 09:00:33.170930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.103 [2024-11-28 09:00:33.179892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:39.103 [2024-11-28 09:00:33.196934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.103 
[2024-11-28 09:00:33.196965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.103 [2024-11-28 09:00:33.196976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.917 ms 00:16:39.103 [2024-11-28 09:00:33.196986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.361 [2024-11-28 09:00:33.255683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.361 [2024-11-28 09:00:33.255722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:39.361 [2024-11-28 09:00:33.255734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.625 ms 00:16:39.361 [2024-11-28 09:00:33.255746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.361 [2024-11-28 09:00:33.255951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.361 [2024-11-28 09:00:33.255969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.361 [2024-11-28 09:00:33.255978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:16:39.362 [2024-11-28 09:00:33.255988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.259403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.259438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:39.362 [2024-11-28 09:00:33.259448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:16:39.362 [2024-11-28 09:00:33.259458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.262036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.262195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:39.362 [2024-11-28 09:00:33.262210] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:16:39.362 [2024-11-28 09:00:33.262219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.262544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.262567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.362 [2024-11-28 09:00:33.262575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:16:39.362 [2024-11-28 09:00:33.262599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.293452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.293626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:39.362 [2024-11-28 09:00:33.293662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.819 ms 00:16:39.362 [2024-11-28 09:00:33.293676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.298374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.298482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:39.362 [2024-11-28 09:00:33.298547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.623 ms 00:16:39.362 [2024-11-28 09:00:33.298574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.301846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.301945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:39.362 [2024-11-28 09:00:33.301997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:16:39.362 [2024-11-28 09:00:33.302020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 
09:00:33.305309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.305414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:39.362 [2024-11-28 09:00:33.305503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:16:39.362 [2024-11-28 09:00:33.305531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.305783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.305924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:39.362 [2024-11-28 09:00:33.305983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:39.362 [2024-11-28 09:00:33.306011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.306131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.362 [2024-11-28 09:00:33.306197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:39.362 [2024-11-28 09:00:33.306245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:39.362 [2024-11-28 09:00:33.306270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.362 [2024-11-28 09:00:33.307276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.362 [2024-11-28 09:00:33.308394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2752.473 ms, result 0 00:16:39.362 [2024-11-28 09:00:33.309299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:39.362 { 00:16:39.362 "name": "ftl0", 00:16:39.362 "uuid": "253ae1f8-61e7-4809-803e-55b1c37dcf6e" 00:16:39.362 } 00:16:39.362 09:00:33 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 
00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:39.362 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.620 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:39.620 [ 00:16:39.620 { 00:16:39.620 "name": "ftl0", 00:16:39.620 "aliases": [ 00:16:39.620 "253ae1f8-61e7-4809-803e-55b1c37dcf6e" 00:16:39.620 ], 00:16:39.620 "product_name": "FTL disk", 00:16:39.620 "block_size": 4096, 00:16:39.620 "num_blocks": 23592960, 00:16:39.620 "uuid": "253ae1f8-61e7-4809-803e-55b1c37dcf6e", 00:16:39.620 "assigned_rate_limits": { 00:16:39.620 "rw_ios_per_sec": 0, 00:16:39.620 "rw_mbytes_per_sec": 0, 00:16:39.620 "r_mbytes_per_sec": 0, 00:16:39.620 "w_mbytes_per_sec": 0 00:16:39.620 }, 00:16:39.620 "claimed": false, 00:16:39.620 "zoned": false, 00:16:39.620 "supported_io_types": { 00:16:39.620 "read": true, 00:16:39.620 "write": true, 00:16:39.620 "unmap": true, 00:16:39.620 "flush": true, 00:16:39.620 "reset": false, 00:16:39.620 "nvme_admin": false, 00:16:39.620 "nvme_io": false, 00:16:39.620 "nvme_io_md": false, 00:16:39.620 "write_zeroes": true, 00:16:39.620 "zcopy": false, 00:16:39.620 "get_zone_info": false, 00:16:39.620 "zone_management": false, 00:16:39.620 "zone_append": false, 00:16:39.620 "compare": false, 00:16:39.620 "compare_and_write": false, 00:16:39.620 "abort": false, 00:16:39.620 "seek_hole": false, 00:16:39.620 "seek_data": false, 00:16:39.620 "copy": false, 00:16:39.620 "nvme_iov_md": false 
00:16:39.620 }, 00:16:39.620 "driver_specific": { 00:16:39.620 "ftl": { 00:16:39.620 "base_bdev": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:39.620 "cache": "nvc0n1p0" 00:16:39.620 } 00:16:39.620 } 00:16:39.620 } 00:16:39.620 ] 00:16:39.879 09:00:33 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:39.879 09:00:33 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:39.879 09:00:33 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:39.879 09:00:33 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:39.879 09:00:33 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:40.137 09:00:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:40.137 { 00:16:40.137 "name": "ftl0", 00:16:40.137 "aliases": [ 00:16:40.137 "253ae1f8-61e7-4809-803e-55b1c37dcf6e" 00:16:40.137 ], 00:16:40.137 "product_name": "FTL disk", 00:16:40.137 "block_size": 4096, 00:16:40.137 "num_blocks": 23592960, 00:16:40.137 "uuid": "253ae1f8-61e7-4809-803e-55b1c37dcf6e", 00:16:40.137 "assigned_rate_limits": { 00:16:40.137 "rw_ios_per_sec": 0, 00:16:40.137 "rw_mbytes_per_sec": 0, 00:16:40.137 "r_mbytes_per_sec": 0, 00:16:40.137 "w_mbytes_per_sec": 0 00:16:40.137 }, 00:16:40.137 "claimed": false, 00:16:40.137 "zoned": false, 00:16:40.137 "supported_io_types": { 00:16:40.137 "read": true, 00:16:40.137 "write": true, 00:16:40.137 "unmap": true, 00:16:40.137 "flush": true, 00:16:40.137 "reset": false, 00:16:40.137 "nvme_admin": false, 00:16:40.137 "nvme_io": false, 00:16:40.137 "nvme_io_md": false, 00:16:40.137 "write_zeroes": true, 00:16:40.137 "zcopy": false, 00:16:40.137 "get_zone_info": false, 00:16:40.137 "zone_management": false, 00:16:40.137 "zone_append": false, 00:16:40.137 "compare": false, 00:16:40.137 "compare_and_write": false, 00:16:40.137 "abort": false, 00:16:40.137 "seek_hole": false, 00:16:40.137 "seek_data": false, 00:16:40.137 "copy": 
false, 00:16:40.137 "nvme_iov_md": false 00:16:40.137 }, 00:16:40.137 "driver_specific": { 00:16:40.137 "ftl": { 00:16:40.137 "base_bdev": "a22ca64d-4417-43d1-9743-df8d50ab15c9", 00:16:40.137 "cache": "nvc0n1p0" 00:16:40.137 } 00:16:40.137 } 00:16:40.137 } 00:16:40.137 ]' 00:16:40.137 09:00:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:40.137 09:00:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:40.137 09:00:34 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:40.398 [2024-11-28 09:00:34.359899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.398 [2024-11-28 09:00:34.360035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.398 [2024-11-28 09:00:34.360056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:40.398 [2024-11-28 09:00:34.360065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.398 [2024-11-28 09:00:34.360105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.398 [2024-11-28 09:00:34.360648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.398 [2024-11-28 09:00:34.360668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.398 [2024-11-28 09:00:34.360677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:16:40.398 [2024-11-28 09:00:34.360687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.398 [2024-11-28 09:00:34.361249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.398 [2024-11-28 09:00:34.361275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.398 [2024-11-28 09:00:34.361287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:16:40.398 [2024-11-28 09:00:34.361297] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.398 [2024-11-28 09:00:34.364955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.398 [2024-11-28 09:00:34.364983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.398 [2024-11-28 09:00:34.364993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:16:40.398 [2024-11-28 09:00:34.365003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.398 [2024-11-28 09:00:34.371968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.398 [2024-11-28 09:00:34.371997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.399 [2024-11-28 09:00:34.372016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:16:40.399 [2024-11-28 09:00:34.372031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.373813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.373846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.399 [2024-11-28 09:00:34.373855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:16:40.399 [2024-11-28 09:00:34.373865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.378528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.378564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.399 [2024-11-28 09:00:34.378574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:16:40.399 [2024-11-28 09:00:34.378584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.378779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 
09:00:34.378810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.399 [2024-11-28 09:00:34.378819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:40.399 [2024-11-28 09:00:34.378828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.380491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.380524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:40.399 [2024-11-28 09:00:34.380533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:16:40.399 [2024-11-28 09:00:34.380547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.381982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.382017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:40.399 [2024-11-28 09:00:34.382027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.391 ms 00:16:40.399 [2024-11-28 09:00:34.382037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.383342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.383371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.399 [2024-11-28 09:00:34.383379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:16:40.399 [2024-11-28 09:00:34.383388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.384532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.399 [2024-11-28 09:00:34.384561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.399 [2024-11-28 09:00:34.384569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.041 ms 00:16:40.399 [2024-11-28 09:00:34.384579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.399 [2024-11-28 09:00:34.384622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.399 [2024-11-28 09:00:34.384638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 
09:00:34.384769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 
[2024-11-28 09:00:34.384911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.384998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:16:40.399 [2024-11-28 09:00:34.385052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:40.399 [2024-11-28 09:00:34.385068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: 
free 00:16:40.400 [2024-11-28 09:00:34.385167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:16:40.400 [2024-11-28 09:00:34.385285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 
0 state: free 00:16:40.400 [2024-11-28 09:00:34.385402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.400 [2024-11-28 09:00:34.385562] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.400 [2024-11-28 09:00:34.385570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:16:40.400 [2024-11-28 09:00:34.385580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.400 [2024-11-28 09:00:34.385587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.400 [2024-11-28 09:00:34.385596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.400 [2024-11-28 09:00:34.385604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.400 [2024-11-28 09:00:34.385615] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.400 [2024-11-28 09:00:34.385622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.400 [2024-11-28 09:00:34.385631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.400 [2024-11-28 09:00:34.385637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.400 [2024-11-28 09:00:34.385644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.400 [2024-11-28 09:00:34.385651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.400 [2024-11-28 
09:00:34.385660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.401 [2024-11-28 09:00:34.385669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:16:40.401 [2024-11-28 09:00:34.385692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.387552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.401 [2024-11-28 09:00:34.387573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.401 [2024-11-28 09:00:34.387586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:16:40.401 [2024-11-28 09:00:34.387595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.387706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.401 [2024-11-28 09:00:34.387718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.401 [2024-11-28 09:00:34.387726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:40.401 [2024-11-28 09:00:34.387736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.394141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.394170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.401 [2024-11-28 09:00:34.394181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.394193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.394286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.394299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.401 [2024-11-28 09:00:34.394320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.394331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.394384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.394396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.401 [2024-11-28 09:00:34.394404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.394417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.394452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.394463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.401 [2024-11-28 09:00:34.394471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.394481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.406297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.406333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.401 [2024-11-28 09:00:34.406346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.406356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.401 [2024-11-28 09:00:34.416301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416370] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.401 [2024-11-28 09:00:34.416390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.401 [2024-11-28 09:00:34.416486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.401 [2024-11-28 09:00:34.416619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.401 [2024-11-28 09:00:34.416715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:16:40.401 [2024-11-28 09:00:34.416820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.416908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.401 [2024-11-28 09:00:34.416922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.401 [2024-11-28 09:00:34.416932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.401 [2024-11-28 09:00:34.416951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.401 [2024-11-28 09:00:34.417147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.234 ms, result 0 00:16:40.401 true 00:16:40.401 09:00:34 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85570 00:16:40.401 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85570 ']' 00:16:40.401 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85570 00:16:40.401 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:40.401 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.401 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85570 00:16:40.402 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.402 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.402 killing process with pid 85570 00:16:40.402 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85570' 00:16:40.402 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85570 00:16:40.402 09:00:34 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85570 00:16:45.678 09:00:39 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom 
bs=4K count=65536 00:16:46.622 65536+0 records in 00:16:46.622 65536+0 records out 00:16:46.622 268435456 bytes (268 MB, 256 MiB) copied, 1.09742 s, 245 MB/s 00:16:46.622 09:00:40 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:46.622 [2024-11-28 09:00:40.527978] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:46.622 [2024-11-28 09:00:40.528113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85735 ] 00:16:46.622 [2024-11-28 09:00:40.677303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.883 [2024-11-28 09:00:40.741701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.883 [2024-11-28 09:00:40.878638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:46.883 [2024-11-28 09:00:40.878719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:47.147 [2024-11-28 09:00:41.041673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.041734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:47.147 [2024-11-28 09:00:41.041750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:47.147 [2024-11-28 09:00:41.041760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.044562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.044610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.147 [2024-11-28 
09:00:41.044628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:16:47.147 [2024-11-28 09:00:41.044640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.044749] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:47.147 [2024-11-28 09:00:41.045070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:47.147 [2024-11-28 09:00:41.045092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.045102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.147 [2024-11-28 09:00:41.045118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:16:47.147 [2024-11-28 09:00:41.045125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.047393] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:47.147 [2024-11-28 09:00:41.050489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.050522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:47.147 [2024-11-28 09:00:41.050531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:16:47.147 [2024-11-28 09:00:41.050545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.050603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.050613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:47.147 [2024-11-28 09:00:41.050621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:47.147 [2024-11-28 09:00:41.050629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 
09:00:41.057246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.057273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.147 [2024-11-28 09:00:41.057282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:16:47.147 [2024-11-28 09:00:41.057294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.057404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.057419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.147 [2024-11-28 09:00:41.057428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:47.147 [2024-11-28 09:00:41.057436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.057466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.057478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:47.147 [2024-11-28 09:00:41.057486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:47.147 [2024-11-28 09:00:41.057494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.057515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:47.147 [2024-11-28 09:00:41.059204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.059228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.147 [2024-11-28 09:00:41.059238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:16:47.147 [2024-11-28 09:00:41.059246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.059281] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.147 [2024-11-28 09:00:41.059294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:47.147 [2024-11-28 09:00:41.059304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:47.147 [2024-11-28 09:00:41.059312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.147 [2024-11-28 09:00:41.059334] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:47.147 [2024-11-28 09:00:41.059354] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:47.147 [2024-11-28 09:00:41.059393] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:47.147 [2024-11-28 09:00:41.059408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:47.147 [2024-11-28 09:00:41.059517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:47.147 [2024-11-28 09:00:41.059534] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:47.147 [2024-11-28 09:00:41.059546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:47.147 [2024-11-28 09:00:41.059556] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:47.147 [2024-11-28 09:00:41.059565] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:47.148 [2024-11-28 09:00:41.059578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:47.148 [2024-11-28 09:00:41.059589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:47.148 [2024-11-28 
09:00:41.059596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:47.148 [2024-11-28 09:00:41.059605] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:47.148 [2024-11-28 09:00:41.059612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.148 [2024-11-28 09:00:41.059622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:47.148 [2024-11-28 09:00:41.059631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:47.148 [2024-11-28 09:00:41.059639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.148 [2024-11-28 09:00:41.059731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.148 [2024-11-28 09:00:41.059740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:47.148 [2024-11-28 09:00:41.059748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:47.148 [2024-11-28 09:00:41.059762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.148 [2024-11-28 09:00:41.059877] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:47.148 [2024-11-28 09:00:41.059894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:47.148 [2024-11-28 09:00:41.059904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.148 [2024-11-28 09:00:41.059920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.059929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:47.148 [2024-11-28 09:00:41.059938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.059946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:47.148 [2024-11-28 09:00:41.059954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:16:47.148 [2024-11-28 09:00:41.059965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:47.148 [2024-11-28 09:00:41.059973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.148 [2024-11-28 09:00:41.059981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:47.148 [2024-11-28 09:00:41.059992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:47.148 [2024-11-28 09:00:41.060000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:47.148 [2024-11-28 09:00:41.060008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:47.148 [2024-11-28 09:00:41.060016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:47.148 [2024-11-28 09:00:41.060024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:47.148 [2024-11-28 09:00:41.060039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:47.148 [2024-11-28 09:00:41.060063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:47.148 [2024-11-28 09:00:41.060087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060106] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:16:47.148 [2024-11-28 09:00:41.060114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:47.148 [2024-11-28 09:00:41.060137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:47.148 [2024-11-28 09:00:41.060160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.148 [2024-11-28 09:00:41.060175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:47.148 [2024-11-28 09:00:41.060182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:47.148 [2024-11-28 09:00:41.060189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:47.148 [2024-11-28 09:00:41.060197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:47.148 [2024-11-28 09:00:41.060205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:47.148 [2024-11-28 09:00:41.060212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:47.148 [2024-11-28 09:00:41.060230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:47.148 [2024-11-28 09:00:41.060236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060244] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:47.148 [2024-11-28 09:00:41.060252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:47.148 [2024-11-28 09:00:41.060261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:47.148 [2024-11-28 09:00:41.060276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:47.148 [2024-11-28 09:00:41.060283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:47.148 [2024-11-28 09:00:41.060290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:47.148 [2024-11-28 09:00:41.060298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:47.148 [2024-11-28 09:00:41.060304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:47.148 [2024-11-28 09:00:41.060311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:47.148 [2024-11-28 09:00:41.060319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:47.148 [2024-11-28 09:00:41.060328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:47.148 [2024-11-28 09:00:41.060348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:47.148 [2024-11-28 09:00:41.060355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:47.148 [2024-11-28 09:00:41.060362] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:47.148 [2024-11-28 09:00:41.060369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:47.148 [2024-11-28 09:00:41.060376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:47.148 [2024-11-28 09:00:41.060383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:47.148 [2024-11-28 09:00:41.060390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:47.148 [2024-11-28 09:00:41.060397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:47.148 [2024-11-28 09:00:41.060404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:47.148 [2024-11-28 09:00:41.060440] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout 
- base dev: 00:16:47.148 [2024-11-28 09:00:41.060449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:47.148 [2024-11-28 09:00:41.060466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:47.148 [2024-11-28 09:00:41.060473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:47.148 [2024-11-28 09:00:41.060480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:47.148 [2024-11-28 09:00:41.060489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.148 [2024-11-28 09:00:41.060501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:47.148 [2024-11-28 09:00:41.060511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:16:47.148 [2024-11-28 09:00:41.060518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.148 [2024-11-28 09:00:41.081175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.148 [2024-11-28 09:00:41.081226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.148 [2024-11-28 09:00:41.081242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.605 ms 00:16:47.148 [2024-11-28 09:00:41.081253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.148 [2024-11-28 09:00:41.081439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.148 [2024-11-28 09:00:41.081464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize band addresses 00:16:47.148 [2024-11-28 09:00:41.081476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:47.148 [2024-11-28 09:00:41.081491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.092212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.092243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.149 [2024-11-28 09:00:41.092256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.693 ms 00:16:47.149 [2024-11-28 09:00:41.092264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.092324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.092334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.149 [2024-11-28 09:00:41.092345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:47.149 [2024-11-28 09:00:41.092353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.092759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.092774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.149 [2024-11-28 09:00:41.092783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:16:47.149 [2024-11-28 09:00:41.092790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.092956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.092967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.149 [2024-11-28 09:00:41.092977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:47.149 [2024-11-28 09:00:41.092988] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.099224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.099260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.149 [2024-11-28 09:00:41.099270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.215 ms 00:16:47.149 [2024-11-28 09:00:41.099277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.102438] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:47.149 [2024-11-28 09:00:41.102469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:47.149 [2024-11-28 09:00:41.102486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.102494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:47.149 [2024-11-28 09:00:41.102502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:16:47.149 [2024-11-28 09:00:41.102509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.117351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.117393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:47.149 [2024-11-28 09:00:41.117404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.784 ms 00:16:47.149 [2024-11-28 09:00:41.117412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.119505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.119530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:47.149 [2024-11-28 
09:00:41.119539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:16:47.149 [2024-11-28 09:00:41.119546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.121403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.121428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:47.149 [2024-11-28 09:00:41.121443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:16:47.149 [2024-11-28 09:00:41.121450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.121780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.121792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.149 [2024-11-28 09:00:41.121819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:16:47.149 [2024-11-28 09:00:41.121826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.140922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.140956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:47.149 [2024-11-28 09:00:41.140968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.072 ms 00:16:47.149 [2024-11-28 09:00:41.140976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.148751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:47.149 [2024-11-28 09:00:41.166209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.166240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.149 [2024-11-28 09:00:41.166257] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.154 ms 00:16:47.149 [2024-11-28 09:00:41.166266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.166343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.166354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:47.149 [2024-11-28 09:00:41.166363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:47.149 [2024-11-28 09:00:41.166371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.166424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.166436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.149 [2024-11-28 09:00:41.166445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:47.149 [2024-11-28 09:00:41.166453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.166477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.166486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.149 [2024-11-28 09:00:41.166494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:47.149 [2024-11-28 09:00:41.166502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.166534] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:47.149 [2024-11-28 09:00:41.166545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.166553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:47.149 [2024-11-28 09:00:41.166564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:47.149 [2024-11-28 
09:00:41.166571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.171058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.171089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.149 [2024-11-28 09:00:41.171099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:16:47.149 [2024-11-28 09:00:41.171108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.171186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.149 [2024-11-28 09:00:41.171196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.149 [2024-11-28 09:00:41.171208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:47.149 [2024-11-28 09:00:41.171215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.149 [2024-11-28 09:00:41.172110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:47.149 [2024-11-28 09:00:41.173132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.127 ms, result 0 00:16:47.149 [2024-11-28 09:00:41.174184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.149 [2024-11-28 09:00:41.183230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.094  [2024-11-28T09:00:43.596Z] Copying: 13/256 [MB] (13 MBps) [2024-11-28T09:00:44.540Z] Copying: 34/256 [MB] (21 MBps) [2024-11-28T09:00:45.485Z] Copying: 47/256 [MB] (13 MBps) [2024-11-28T09:00:46.429Z] Copying: 64/256 [MB] (16 MBps) [2024-11-28T09:00:47.369Z] Copying: 80/256 [MB] (16 MBps) [2024-11-28T09:00:48.309Z] Copying: 94/256 [MB] (13 MBps) [2024-11-28T09:00:49.251Z] 
Copying: 107/256 [MB] (12 MBps) [2024-11-28T09:00:50.626Z] Copying: 117/256 [MB] (10 MBps) [2024-11-28T09:00:51.198Z] Copying: 130/256 [MB] (12 MBps) [2024-11-28T09:00:52.580Z] Copying: 141/256 [MB] (11 MBps) [2024-11-28T09:00:53.514Z] Copying: 152/256 [MB] (10 MBps) [2024-11-28T09:00:54.448Z] Copying: 164/256 [MB] (12 MBps) [2024-11-28T09:00:55.386Z] Copying: 177/256 [MB] (12 MBps) [2024-11-28T09:00:56.319Z] Copying: 187/256 [MB] (10 MBps) [2024-11-28T09:00:57.300Z] Copying: 199/256 [MB] (11 MBps) [2024-11-28T09:00:58.235Z] Copying: 211/256 [MB] (12 MBps) [2024-11-28T09:00:59.610Z] Copying: 224/256 [MB] (12 MBps) [2024-11-28T09:01:00.545Z] Copying: 237/256 [MB] (12 MBps) [2024-11-28T09:01:00.806Z] Copying: 249/256 [MB] (12 MBps) [2024-11-28T09:01:00.806Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-28 09:01:00.681094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.686 [2024-11-28 09:01:00.682399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.682431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.686 [2024-11-28 09:01:00.682444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:06.686 [2024-11-28 09:01:00.682456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.682473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:06.686 [2024-11-28 09:01:00.683016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.683040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.686 [2024-11-28 09:01:00.683056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:06.686 [2024-11-28 09:01:00.683062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:06.686 [2024-11-28 09:01:00.685593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.685625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.686 [2024-11-28 09:01:00.685633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:17:06.686 [2024-11-28 09:01:00.685640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.691838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.691871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:06.686 [2024-11-28 09:01:00.691879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:17:06.686 [2024-11-28 09:01:00.691885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.697202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.697229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:06.686 [2024-11-28 09:01:00.697238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.238 ms 00:17:06.686 [2024-11-28 09:01:00.697245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.698921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.698950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.686 [2024-11-28 09:01:00.698957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:17:06.686 [2024-11-28 09:01:00.698963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.702760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.702805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:17:06.686 [2024-11-28 09:01:00.702817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:17:06.686 [2024-11-28 09:01:00.702831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.702925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.702933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.686 [2024-11-28 09:01:00.702943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:06.686 [2024-11-28 09:01:00.702949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.705619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.705649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:06.686 [2024-11-28 09:01:00.705656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:17:06.686 [2024-11-28 09:01:00.705662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.707846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.707897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:06.686 [2024-11-28 09:01:00.707904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:17:06.686 [2024-11-28 09:01:00.707910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.709649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.709678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.686 [2024-11-28 09:01:00.709685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:17:06.686 [2024-11-28 09:01:00.709691] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.711421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.686 [2024-11-28 09:01:00.711449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.686 [2024-11-28 09:01:00.711456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:17:06.686 [2024-11-28 09:01:00.711461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.686 [2024-11-28 09:01:00.711486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.686 [2024-11-28 09:01:00.711497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.686 [2024-11-28 09:01:00.711544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711636] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711716] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711794] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 
09:01:00.711889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.687 [2024-11-28 09:01:00.711965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 
[2024-11-28 09:01:00.711971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.711977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.711982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.711989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.711994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:17:06.688 [2024-11-28 09:01:00.712053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.688 [2024-11-28 09:01:00.712100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:06.688 [2024-11-28 09:01:00.712106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:06.688 [2024-11-28 09:01:00.712113] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.688 [2024-11-28 09:01:00.712123] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.688 [2024-11-28 09:01:00.712129] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.688 [2024-11-28 09:01:00.712134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.688 [2024-11-28 09:01:00.712140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.688 [2024-11-28 09:01:00.712146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.688 [2024-11-28 09:01:00.712151] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.688 [2024-11-28 09:01:00.712156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.688 [2024-11-28 09:01:00.712161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.688 [2024-11-28 09:01:00.712166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.688 [2024-11-28 09:01:00.712173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.688 [2024-11-28 09:01:00.712179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:17:06.688 [2024-11-28 09:01:00.712186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.713972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.688 [2024-11-28 09:01:00.713997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:06.688 [2024-11-28 09:01:00.714005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:17:06.688 [2024-11-28 09:01:00.714012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.714100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.688 [2024-11-28 09:01:00.714119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.688 [2024-11-28 09:01:00.714129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:06.688 [2024-11-28 09:01:00.714136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.719757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.719786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.688 [2024-11-28 09:01:00.719794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 
09:01:00.719834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.719882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.719890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.688 [2024-11-28 09:01:00.719898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.719904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.719937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.719944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.688 [2024-11-28 09:01:00.719951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.719957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.719971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.719978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.688 [2024-11-28 09:01:00.719984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.719992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.731003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.731037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.688 [2024-11-28 09:01:00.731046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.731052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.739624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:17:06.688 [2024-11-28 09:01:00.739659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.688 [2024-11-28 09:01:00.739674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.739681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.739734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.739747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.688 [2024-11-28 09:01:00.739753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.739760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.739785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.739792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.688 [2024-11-28 09:01:00.739851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.688 [2024-11-28 09:01:00.739859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.688 [2024-11-28 09:01:00.739924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.688 [2024-11-28 09:01:00.739937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.689 [2024-11-28 09:01:00.739944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.689 [2024-11-28 09:01:00.739951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.689 [2024-11-28 09:01:00.739976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.689 [2024-11-28 09:01:00.739985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.689 [2024-11-28 09:01:00.739992] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.689 [2024-11-28 09:01:00.739998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.689 [2024-11-28 09:01:00.740037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.689 [2024-11-28 09:01:00.740046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.689 [2024-11-28 09:01:00.740052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.689 [2024-11-28 09:01:00.740058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.689 [2024-11-28 09:01:00.740102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.689 [2024-11-28 09:01:00.740112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.689 [2024-11-28 09:01:00.740118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.689 [2024-11-28 09:01:00.740124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.689 [2024-11-28 09:01:00.740251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.823 ms, result 0 00:17:06.949 00:17:06.949 00:17:06.949 09:01:01 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85955 00:17:06.949 09:01:01 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85955 00:17:06.949 09:01:01 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85955 ']' 00:17:06.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.949 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:07.209 [2024-11-28 09:01:01.085874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:07.209 [2024-11-28 09:01:01.085998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85955 ] 00:17:07.209 [2024-11-28 09:01:01.229369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.209 [2024-11-28 09:01:01.272835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.798 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.798 09:01:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:07.798 09:01:01 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:08.055 [2024-11-28 09:01:02.116844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.055 [2024-11-28 09:01:02.116926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.315 [2024-11-28 09:01:02.267342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.267380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.315 [2024-11-28 09:01:02.267394] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.315 [2024-11-28 09:01:02.267403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.269242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.269274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.315 [2024-11-28 09:01:02.269282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:17:08.315 [2024-11-28 09:01:02.269289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.269353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.315 [2024-11-28 09:01:02.269610] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.315 [2024-11-28 09:01:02.269622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.269629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.315 [2024-11-28 09:01:02.269636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:08.315 [2024-11-28 09:01:02.269644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.270977] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:08.315 [2024-11-28 09:01:02.273724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.273754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:08.315 [2024-11-28 09:01:02.273763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.745 ms 00:17:08.315 [2024-11-28 09:01:02.273770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.273830] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.273838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:08.315 [2024-11-28 09:01:02.273848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:08.315 [2024-11-28 09:01:02.273854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.280104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.280129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.315 [2024-11-28 09:01:02.280141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:17:08.315 [2024-11-28 09:01:02.280147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.280231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.280239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.315 [2024-11-28 09:01:02.280248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:08.315 [2024-11-28 09:01:02.280253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.280274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.280282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.315 [2024-11-28 09:01:02.280291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:08.315 [2024-11-28 09:01:02.280298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.280318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:08.315 [2024-11-28 09:01:02.281923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.281948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.315 [2024-11-28 09:01:02.281955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:17:08.315 [2024-11-28 09:01:02.281963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.281996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.315 [2024-11-28 09:01:02.282004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.315 [2024-11-28 09:01:02.282011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:08.315 [2024-11-28 09:01:02.282018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.315 [2024-11-28 09:01:02.282035] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:08.315 [2024-11-28 09:01:02.282051] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:08.315 [2024-11-28 09:01:02.282083] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:08.315 [2024-11-28 09:01:02.282099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:08.315 [2024-11-28 09:01:02.282181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:08.315 [2024-11-28 09:01:02.282192] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:08.315 [2024-11-28 09:01:02.282200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:08.315 [2024-11-28 09:01:02.282210] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:17:08.315 [2024-11-28 09:01:02.282222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282231] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:08.316 [2024-11-28 09:01:02.282237] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.316 [2024-11-28 09:01:02.282245] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:08.316 [2024-11-28 09:01:02.282252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:08.316 [2024-11-28 09:01:02.282259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.316 [2024-11-28 09:01:02.282267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.316 [2024-11-28 09:01:02.282275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:17:08.316 [2024-11-28 09:01:02.282280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.316 [2024-11-28 09:01:02.282349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.316 [2024-11-28 09:01:02.282362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.316 [2024-11-28 09:01:02.282370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:08.316 [2024-11-28 09:01:02.282376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.316 [2024-11-28 09:01:02.282456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.316 [2024-11-28 09:01:02.282465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.316 [2024-11-28 09:01:02.282475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 
[2024-11-28 09:01:02.282489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.316 [2024-11-28 09:01:02.282495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.316 [2024-11-28 09:01:02.282520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.316 [2024-11-28 09:01:02.282532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.316 [2024-11-28 09:01:02.282537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:08.316 [2024-11-28 09:01:02.282543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.316 [2024-11-28 09:01:02.282551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.316 [2024-11-28 09:01:02.282557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:08.316 [2024-11-28 09:01:02.282564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.316 [2024-11-28 09:01:02.282578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.316 [2024-11-28 09:01:02.282602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 
MiB 00:17:08.316 [2024-11-28 09:01:02.282615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.316 [2024-11-28 09:01:02.282621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.316 [2024-11-28 09:01:02.282643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.316 [2024-11-28 09:01:02.282663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.316 [2024-11-28 09:01:02.282684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.316 [2024-11-28 09:01:02.282698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.316 [2024-11-28 09:01:02.282704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:08.316 [2024-11-28 09:01:02.282713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.316 [2024-11-28 09:01:02.282719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:08.316 [2024-11-28 09:01:02.282727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:08.316 [2024-11-28 09:01:02.282733] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:08.316 [2024-11-28 09:01:02.282746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:08.316 [2024-11-28 09:01:02.282753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.316 [2024-11-28 09:01:02.282767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.316 [2024-11-28 09:01:02.282777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.316 [2024-11-28 09:01:02.282791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:08.316 [2024-11-28 09:01:02.282814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.316 [2024-11-28 09:01:02.282822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:08.316 [2024-11-28 09:01:02.282829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.316 [2024-11-28 09:01:02.282836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:08.316 [2024-11-28 09:01:02.282845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.316 [2024-11-28 09:01:02.282853] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.316 [2024-11-28 09:01:02.282863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.316 [2024-11-28 09:01:02.282872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 
00:17:08.316 [2024-11-28 09:01:02.282881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:08.316 [2024-11-28 09:01:02.282888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:08.316 [2024-11-28 09:01:02.282896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:08.316 [2024-11-28 09:01:02.282903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:08.316 [2024-11-28 09:01:02.282912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:08.316 [2024-11-28 09:01:02.282918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:08.316 [2024-11-28 09:01:02.282927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:08.316 [2024-11-28 09:01:02.282933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:08.316 [2024-11-28 09:01:02.282941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:08.316 [2024-11-28 09:01:02.282947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:08.316 [2024-11-28 09:01:02.282953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:08.317 [2024-11-28 09:01:02.282958] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:08.317 [2024-11-28 09:01:02.282966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:08.317 [2024-11-28 09:01:02.282971] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.317 [2024-11-28 09:01:02.282982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.317 [2024-11-28 09:01:02.282989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.317 [2024-11-28 09:01:02.282995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.317 [2024-11-28 09:01:02.283000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.317 [2024-11-28 09:01:02.283007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.317 [2024-11-28 09:01:02.283013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.283023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.317 [2024-11-28 09:01:02.283030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:17:08.317 [2024-11-28 09:01:02.283038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.294321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.294349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
metadata 00:17:08.317 [2024-11-28 09:01:02.294358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.240 ms 00:17:08.317 [2024-11-28 09:01:02.294369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.294465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.294478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.317 [2024-11-28 09:01:02.294486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:08.317 [2024-11-28 09:01:02.294494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.304179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.304209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.317 [2024-11-28 09:01:02.304216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.667 ms 00:17:08.317 [2024-11-28 09:01:02.304224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.304258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.304269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.317 [2024-11-28 09:01:02.304275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:08.317 [2024-11-28 09:01:02.304283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.304676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.304702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.317 [2024-11-28 09:01:02.304714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:17:08.317 [2024-11-28 09:01:02.304722] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.304882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.304901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.317 [2024-11-28 09:01:02.304915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:08.317 [2024-11-28 09:01:02.304927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.327368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.327462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.317 [2024-11-28 09:01:02.327494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:17:08.317 [2024-11-28 09:01:02.327520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.331229] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:08.317 [2024-11-28 09:01:02.331262] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:08.317 [2024-11-28 09:01:02.331272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.331281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:08.317 [2024-11-28 09:01:02.331287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:17:08.317 [2024-11-28 09:01:02.331294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.342954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.342988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:08.317 [2024-11-28 09:01:02.342997] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.621 ms 00:17:08.317 [2024-11-28 09:01:02.343007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.345081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.345112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:08.317 [2024-11-28 09:01:02.345119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:17:08.317 [2024-11-28 09:01:02.345127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.346789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.346826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:08.317 [2024-11-28 09:01:02.346834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:17:08.317 [2024-11-28 09:01:02.346841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.347099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.347113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:08.317 [2024-11-28 09:01:02.347121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:17:08.317 [2024-11-28 09:01:02.347129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.364654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.364686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:08.317 [2024-11-28 09:01:02.364695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.497 ms 00:17:08.317 [2024-11-28 09:01:02.364705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:08.317 [2024-11-28 09:01:02.371219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:08.317 [2024-11-28 09:01:02.385894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.385922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.317 [2024-11-28 09:01:02.385934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.117 ms 00:17:08.317 [2024-11-28 09:01:02.385941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.386013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.386021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:08.317 [2024-11-28 09:01:02.386032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:08.317 [2024-11-28 09:01:02.386038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.386084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.386096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:08.317 [2024-11-28 09:01:02.386104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:08.317 [2024-11-28 09:01:02.386110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.386131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.386138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.317 [2024-11-28 09:01:02.386152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.317 [2024-11-28 09:01:02.386159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.317 [2024-11-28 09:01:02.386187] 
mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:08.317 [2024-11-28 09:01:02.386194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.317 [2024-11-28 09:01:02.386201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:08.317 [2024-11-28 09:01:02.386208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:08.317 [2024-11-28 09:01:02.386215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.318 [2024-11-28 09:01:02.390485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.318 [2024-11-28 09:01:02.390520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.318 [2024-11-28 09:01:02.390528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:17:08.318 [2024-11-28 09:01:02.390540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.318 [2024-11-28 09:01:02.390606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.318 [2024-11-28 09:01:02.390615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.318 [2024-11-28 09:01:02.390625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:08.318 [2024-11-28 09:01:02.390633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.318 [2024-11-28 09:01:02.391401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.318 [2024-11-28 09:01:02.392251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.810 ms, result 0 00:17:08.318 [2024-11-28 09:01:02.393986] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.318 Some configs were skipped because the RPC state that can call 
them passed over. 00:17:08.318 09:01:02 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:08.575 [2024-11-28 09:01:02.608994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.575 [2024-11-28 09:01:02.609028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:08.575 [2024-11-28 09:01:02.609039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:17:08.575 [2024-11-28 09:01:02.609046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.575 [2024-11-28 09:01:02.609072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.783 ms, result 0 00:17:08.575 true 00:17:08.575 09:01:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:08.833 [2024-11-28 09:01:02.808564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.833 [2024-11-28 09:01:02.808598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:08.833 [2024-11-28 09:01:02.808605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:17:08.833 [2024-11-28 09:01:02.808613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.833 [2024-11-28 09:01:02.808638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.206 ms, result 0 00:17:08.833 true 00:17:08.833 09:01:02 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85955 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85955 ']' 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85955 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85955 00:17:08.833 killing process with pid 85955 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85955' 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85955 00:17:08.833 09:01:02 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85955 00:17:09.092 [2024-11-28 09:01:02.963162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.963208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.092 [2024-11-28 09:01:02.963219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.092 [2024-11-28 09:01:02.963226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.963258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.092 [2024-11-28 09:01:02.963777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.963813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.092 [2024-11-28 09:01:02.963822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:17:09.092 [2024-11-28 09:01:02.963832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.964058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.964070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.092 [2024-11-28 09:01:02.964078] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:09.092 [2024-11-28 09:01:02.964091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.967700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.967734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.092 [2024-11-28 09:01:02.967741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:17:09.092 [2024-11-28 09:01:02.967749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.972908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.972945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.092 [2024-11-28 09:01:02.972956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.126 ms 00:17:09.092 [2024-11-28 09:01:02.972969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.975630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.975662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.092 [2024-11-28 09:01:02.975669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:17:09.092 [2024-11-28 09:01:02.975678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.979968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.979999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.092 [2024-11-28 09:01:02.980006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:17:09.092 [2024-11-28 09:01:02.980015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 
09:01:02.980115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.980125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.092 [2024-11-28 09:01:02.980132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:09.092 [2024-11-28 09:01:02.980140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.982884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.982915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.092 [2024-11-28 09:01:02.982922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:17:09.092 [2024-11-28 09:01:02.982934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.984935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.984963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.092 [2024-11-28 09:01:02.984970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.973 ms 00:17:09.092 [2024-11-28 09:01:02.984978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.986699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.986729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.092 [2024-11-28 09:01:02.986736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:17:09.092 [2024-11-28 09:01:02.986743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.988463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.092 [2024-11-28 09:01:02.988492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:17:09.092 [2024-11-28 09:01:02.988499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:17:09.092 [2024-11-28 09:01:02.988505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.092 [2024-11-28 09:01:02.988532] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.092 [2024-11-28 09:01:02.988549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.092 [2024-11-28 09:01:02.988628] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988719] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988829] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 
09:01:02.988976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.988996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 
[2024-11-28 09:01:02.989117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:17:09.093 [2024-11-28 09:01:02.989218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: 
free 00:17:09.093 [2024-11-28 09:01:02.989325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.093 [2024-11-28 09:01:02.989368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.093 [2024-11-28 09:01:02.989375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:09.094 [2024-11-28 09:01:02.989382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.094 [2024-11-28 09:01:02.989388] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.094 [2024-11-28 09:01:02.989396] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.094 [2024-11-28 09:01:02.989403] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.094 [2024-11-28 09:01:02.989410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.094 [2024-11-28 09:01:02.989418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.094 [2024-11-28 09:01:02.989425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.094 [2024-11-28 09:01:02.989431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.094 [2024-11-28 09:01:02.989437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.094 
[2024-11-28 09:01:02.989443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.094 [2024-11-28 09:01:02.989451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.094 [2024-11-28 09:01:02.989458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:17:09.094 [2024-11-28 09:01:02.989467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.991070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.094 [2024-11-28 09:01:02.991094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.094 [2024-11-28 09:01:02.991103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:17:09.094 [2024-11-28 09:01:02.991110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.991200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.094 [2024-11-28 09:01:02.991209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.094 [2024-11-28 09:01:02.991216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:09.094 [2024-11-28 09:01:02.991223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.997367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:02.997398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.094 [2024-11-28 09:01:02.997406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:02.997414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.997481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:02.997490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:17:09.094 [2024-11-28 09:01:02.997497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:02.997507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.997544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:02.997556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.094 [2024-11-28 09:01:02.997563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:02.997570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:02.997584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:02.997592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.094 [2024-11-28 09:01:02.997598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:02.997606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.008672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.008708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.094 [2024-11-28 09:01:03.008716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.008724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.094 [2024-11-28 09:01:03.017534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017544] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.094 [2024-11-28 09:01:03.017607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.094 [2024-11-28 09:01:03.017656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.094 [2024-11-28 09:01:03.017744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.094 [2024-11-28 09:01:03.017793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:09.094 [2024-11-28 09:01:03.017862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.094 [2024-11-28 09:01:03.017869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.017920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.094 [2024-11-28 09:01:03.017939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.094 [2024-11-28 09:01:03.017946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.094 [2024-11-28 09:01:03.017954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.094 [2024-11-28 09:01:03.018082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.896 ms, result 0 00:17:09.392 09:01:03 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:09.392 09:01:03 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:09.392 [2024-11-28 09:01:03.300780] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:09.392 [2024-11-28 09:01:03.300926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85991 ] 00:17:09.392 [2024-11-28 09:01:03.447192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.392 [2024-11-28 09:01:03.489110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.650 [2024-11-28 09:01:03.588740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.650 [2024-11-28 09:01:03.588794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:09.650 [2024-11-28 09:01:03.743118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.743155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:09.650 [2024-11-28 09:01:03.743166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.650 [2024-11-28 09:01:03.743178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.745092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.745121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.650 [2024-11-28 09:01:03.745131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:17:09.650 [2024-11-28 09:01:03.745139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.745197] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:09.650 [2024-11-28 09:01:03.745559] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:09.650 [2024-11-28 
09:01:03.745594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.745602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.650 [2024-11-28 09:01:03.745611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:09.650 [2024-11-28 09:01:03.745617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.746975] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:09.650 [2024-11-28 09:01:03.749849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.749878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:09.650 [2024-11-28 09:01:03.749886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:17:09.650 [2024-11-28 09:01:03.749897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.749951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.749959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:09.650 [2024-11-28 09:01:03.749966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:09.650 [2024-11-28 09:01:03.749972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.756361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.756386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.650 [2024-11-28 09:01:03.756393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.356 ms 00:17:09.650 [2024-11-28 09:01:03.756399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.756488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.756498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.650 [2024-11-28 09:01:03.756504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:09.650 [2024-11-28 09:01:03.756511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.756533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.756541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:09.650 [2024-11-28 09:01:03.756547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:09.650 [2024-11-28 09:01:03.756554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.756574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:09.650 [2024-11-28 09:01:03.758152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.758174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.650 [2024-11-28 09:01:03.758181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:17:09.650 [2024-11-28 09:01:03.758187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.758233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.758240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:09.650 [2024-11-28 09:01:03.758248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:09.650 [2024-11-28 09:01:03.758254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.758268] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:17:09.650 [2024-11-28 09:01:03.758284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:09.650 [2024-11-28 09:01:03.758312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:09.650 [2024-11-28 09:01:03.758327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:09.650 [2024-11-28 09:01:03.758410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:09.650 [2024-11-28 09:01:03.758419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:09.650 [2024-11-28 09:01:03.758427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:09.650 [2024-11-28 09:01:03.758435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:09.650 [2024-11-28 09:01:03.758447] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:09.650 [2024-11-28 09:01:03.758454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:09.650 [2024-11-28 09:01:03.758461] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:09.650 [2024-11-28 09:01:03.758467] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:09.650 [2024-11-28 09:01:03.758474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:09.650 [2024-11-28 09:01:03.758483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.758492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:09.650 [2024-11-28 09:01:03.758499] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:09.650 [2024-11-28 09:01:03.758506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.758573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.650 [2024-11-28 09:01:03.758586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:09.650 [2024-11-28 09:01:03.758593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:09.650 [2024-11-28 09:01:03.758599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.650 [2024-11-28 09:01:03.758677] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:09.650 [2024-11-28 09:01:03.758694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:09.650 [2024-11-28 09:01:03.758704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.650 [2024-11-28 09:01:03.758710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.650 [2024-11-28 09:01:03.758719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:09.650 [2024-11-28 09:01:03.758725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:09.650 [2024-11-28 09:01:03.758731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:09.650 [2024-11-28 09:01:03.758736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:09.650 [2024-11-28 09:01:03.758745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:09.650 [2024-11-28 09:01:03.758751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.650 [2024-11-28 09:01:03.758756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:09.651 [2024-11-28 09:01:03.758762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:09.651 [2024-11-28 09:01:03.758767] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:09.651 [2024-11-28 09:01:03.758772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:09.651 [2024-11-28 09:01:03.758778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:09.651 [2024-11-28 09:01:03.758783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:09.651 [2024-11-28 09:01:03.758808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:09.651 [2024-11-28 09:01:03.758816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:09.651 [2024-11-28 09:01:03.758829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.651 [2024-11-28 09:01:03.758842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:09.651 [2024-11-28 09:01:03.758848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.651 [2024-11-28 09:01:03.758866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:09.651 [2024-11-28 09:01:03.758873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.651 [2024-11-28 09:01:03.758887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:09.651 [2024-11-28 09:01:03.758893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758902] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:09.651 [2024-11-28 09:01:03.758908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:09.651 [2024-11-28 09:01:03.758914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.651 [2024-11-28 09:01:03.758927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:09.651 [2024-11-28 09:01:03.758933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:09.651 [2024-11-28 09:01:03.758939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:09.651 [2024-11-28 09:01:03.758945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:09.651 [2024-11-28 09:01:03.758951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:09.651 [2024-11-28 09:01:03.758958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:09.651 [2024-11-28 09:01:03.758973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:09.651 [2024-11-28 09:01:03.758979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.651 [2024-11-28 09:01:03.758985] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:09.651 [2024-11-28 09:01:03.758996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:09.651 [2024-11-28 09:01:03.759002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:09.651 [2024-11-28 09:01:03.759009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:09.651 [2024-11-28 09:01:03.759018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:17:09.651 [2024-11-28 09:01:03.759024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:09.651 [2024-11-28 09:01:03.759031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:09.651 [2024-11-28 09:01:03.759039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:09.651 [2024-11-28 09:01:03.759045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:09.651 [2024-11-28 09:01:03.759051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:09.651 [2024-11-28 09:01:03.759058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:09.651 [2024-11-28 09:01:03.759067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:09.651 [2024-11-28 09:01:03.759084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:09.651 [2024-11-28 09:01:03.759093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:09.651 [2024-11-28 09:01:03.759099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:09.651 [2024-11-28 09:01:03.759106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:09.651 [2024-11-28 09:01:03.759114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:09.651 [2024-11-28 09:01:03.759120] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:09.651 [2024-11-28 09:01:03.759127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:09.651 [2024-11-28 09:01:03.759133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:09.651 [2024-11-28 09:01:03.759140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:09.651 [2024-11-28 09:01:03.759175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:09.651 [2024-11-28 09:01:03.759183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:09.651 [2024-11-28 09:01:03.759197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:09.651 [2024-11-28 09:01:03.759203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:09.651 [2024-11-28 09:01:03.759208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:09.651 [2024-11-28 09:01:03.759214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.651 [2024-11-28 09:01:03.759222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:09.651 [2024-11-28 09:01:03.759228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:17:09.651 [2024-11-28 09:01:03.759234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.909 [2024-11-28 09:01:03.778143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.909 [2024-11-28 09:01:03.778185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.909 [2024-11-28 09:01:03.778199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.868 ms 00:17:09.909 [2024-11-28 09:01:03.778208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.909 [2024-11-28 09:01:03.778353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.909 [2024-11-28 09:01:03.778374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.909 [2024-11-28 09:01:03.778386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:09.909 [2024-11-28 09:01:03.778394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.909 [2024-11-28 09:01:03.788056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.909 [2024-11-28 09:01:03.788084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.909 [2024-11-28 
09:01:03.788092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.637 ms 00:17:09.909 [2024-11-28 09:01:03.788099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.909 [2024-11-28 09:01:03.788139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.909 [2024-11-28 09:01:03.788155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.909 [2024-11-28 09:01:03.788162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.909 [2024-11-28 09:01:03.788169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.909 [2024-11-28 09:01:03.788566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.788586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.910 [2024-11-28 09:01:03.788594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:17:09.910 [2024-11-28 09:01:03.788600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.788732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.788741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.910 [2024-11-28 09:01:03.788752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:09.910 [2024-11-28 09:01:03.788759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.794780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.794821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.910 [2024-11-28 09:01:03.794829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.001 ms 00:17:09.910 [2024-11-28 09:01:03.794835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:09.910 [2024-11-28 09:01:03.797944] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:09.910 [2024-11-28 09:01:03.797972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:09.910 [2024-11-28 09:01:03.797981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.797988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:09.910 [2024-11-28 09:01:03.797995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms 00:17:09.910 [2024-11-28 09:01:03.798001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.809908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.809937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:09.910 [2024-11-28 09:01:03.809946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.865 ms 00:17:09.910 [2024-11-28 09:01:03.809953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.812041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.812067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:09.910 [2024-11-28 09:01:03.812074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:17:09.910 [2024-11-28 09:01:03.812080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.813675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.813703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:09.910 [2024-11-28 09:01:03.813715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.562 ms 00:17:09.910 [2024-11-28 09:01:03.813721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.813987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.813997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.910 [2024-11-28 09:01:03.814010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:17:09.910 [2024-11-28 09:01:03.814017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.831880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.831914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:09.910 [2024-11-28 09:01:03.831924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:17:09.910 [2024-11-28 09:01:03.831930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.838152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.910 [2024-11-28 09:01:03.853133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.853162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.910 [2024-11-28 09:01:03.853172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.153 ms 00:17:09.910 [2024-11-28 09:01:03.853178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.853276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.853285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:09.910 [2024-11-28 09:01:03.853293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:09.910 [2024-11-28 
09:01:03.853301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.853346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.853355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.910 [2024-11-28 09:01:03.853361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:09.910 [2024-11-28 09:01:03.853367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.853384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.853391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.910 [2024-11-28 09:01:03.853401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.910 [2024-11-28 09:01:03.853408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.853441] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:09.910 [2024-11-28 09:01:03.853448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.853455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:09.910 [2024-11-28 09:01:03.853461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:09.910 [2024-11-28 09:01:03.853470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.857719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.857749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.910 [2024-11-28 09:01:03.857763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms 00:17:09.910 [2024-11-28 09:01:03.857770] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.857848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.910 [2024-11-28 09:01:03.857860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.910 [2024-11-28 09:01:03.857867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:09.910 [2024-11-28 09:01:03.857876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.910 [2024-11-28 09:01:03.858646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.910 [2024-11-28 09:01:03.859498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.266 ms, result 0 00:17:09.910 [2024-11-28 09:01:03.860349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.910 [2024-11-28 09:01:03.869561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.841  [2024-11-28T09:01:05.905Z] Copying: 16/256 [MB] (16 MBps) [2024-11-28T09:01:07.287Z] Copying: 29/256 [MB] (12 MBps) [2024-11-28T09:01:08.226Z] Copying: 43/256 [MB] (13 MBps) [2024-11-28T09:01:09.168Z] Copying: 54/256 [MB] (11 MBps) [2024-11-28T09:01:10.105Z] Copying: 64/256 [MB] (10 MBps) [2024-11-28T09:01:11.048Z] Copying: 76/256 [MB] (11 MBps) [2024-11-28T09:01:11.986Z] Copying: 86/256 [MB] (10 MBps) [2024-11-28T09:01:12.926Z] Copying: 96/256 [MB] (10 MBps) [2024-11-28T09:01:14.309Z] Copying: 106/256 [MB] (10 MBps) [2024-11-28T09:01:14.882Z] Copying: 116/256 [MB] (10 MBps) [2024-11-28T09:01:16.264Z] Copying: 128/256 [MB] (11 MBps) [2024-11-28T09:01:17.202Z] Copying: 153/256 [MB] (25 MBps) [2024-11-28T09:01:18.138Z] Copying: 165/256 [MB] (12 MBps) [2024-11-28T09:01:19.073Z] Copying: 177/256 [MB] (12 MBps) [2024-11-28T09:01:20.008Z] Copying: 190/256 [MB] (12 
MBps) [2024-11-28T09:01:20.968Z] Copying: 201/256 [MB] (11 MBps) [2024-11-28T09:01:21.903Z] Copying: 213/256 [MB] (11 MBps) [2024-11-28T09:01:23.276Z] Copying: 225/256 [MB] (11 MBps) [2024-11-28T09:01:24.220Z] Copying: 237/256 [MB] (11 MBps) [2024-11-28T09:01:24.220Z] Copying: 251/256 [MB] (14 MBps) [2024-11-28T09:01:24.220Z] Copying: 256/256 [MB] (average 12 MBps)[2024-11-28 09:01:24.143100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.100 [2024-11-28 09:01:24.145588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.145646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.100 [2024-11-28 09:01:24.145663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:30.100 [2024-11-28 09:01:24.145673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.145696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:30.100 [2024-11-28 09:01:24.146662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.146702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.100 [2024-11-28 09:01:24.146714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:17:30.100 [2024-11-28 09:01:24.146723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.147019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.147033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.100 [2024-11-28 09:01:24.147043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:30.100 [2024-11-28 09:01:24.147051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:30.100 [2024-11-28 09:01:24.150763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.150792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.100 [2024-11-28 09:01:24.150826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:17:30.100 [2024-11-28 09:01:24.150834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.157777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.157824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:30.100 [2024-11-28 09:01:24.157836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:17:30.100 [2024-11-28 09:01:24.157844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.160909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.160953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.100 [2024-11-28 09:01:24.160964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:17:30.100 [2024-11-28 09:01:24.160986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.167090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.167154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.100 [2024-11-28 09:01:24.167165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.029 ms 00:17:30.100 [2024-11-28 09:01:24.167175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.167315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.167328] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.100 [2024-11-28 09:01:24.167338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:30.100 [2024-11-28 09:01:24.167346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.100 [2024-11-28 09:01:24.170823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.100 [2024-11-28 09:01:24.170863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:30.100 [2024-11-28 09:01:24.170873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.452 ms 00:17:30.101 [2024-11-28 09:01:24.170881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.101 [2024-11-28 09:01:24.173495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.101 [2024-11-28 09:01:24.173539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:30.101 [2024-11-28 09:01:24.173549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:17:30.101 [2024-11-28 09:01:24.173557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.101 [2024-11-28 09:01:24.175855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.101 [2024-11-28 09:01:24.175894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.101 [2024-11-28 09:01:24.175904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:17:30.101 [2024-11-28 09:01:24.175913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.101 [2024-11-28 09:01:24.178234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.101 [2024-11-28 09:01:24.178275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.101 [2024-11-28 09:01:24.178285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:17:30.101 [2024-11-28 
09:01:24.178293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.101 [2024-11-28 09:01:24.178336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.101 [2024-11-28 09:01:24.178354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178567] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178684] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178812] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 
09:01:24.178927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.101 [2024-11-28 09:01:24.178997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 
[2024-11-28 09:01:24.179035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:17:30.102 [2024-11-28 09:01:24.179147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.102 [2024-11-28 09:01:24.179190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.102 [2024-11-28 09:01:24.179199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:30.102 [2024-11-28 09:01:24.179219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.102 [2024-11-28 09:01:24.179227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.102 [2024-11-28 09:01:24.179235] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.102 [2024-11-28 09:01:24.179244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.102 [2024-11-28 09:01:24.179254] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.102 [2024-11-28 09:01:24.179264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.102 [2024-11-28 09:01:24.179271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.102 [2024-11-28 09:01:24.179278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.102 [2024-11-28 09:01:24.179285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.102 [2024-11-28 09:01:24.179292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.102 [2024-11-28 09:01:24.179307] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.102 [2024-11-28 09:01:24.179317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:17:30.102 [2024-11-28 09:01:24.179325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.182472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.102 [2024-11-28 09:01:24.182509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.102 [2024-11-28 09:01:24.182530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:17:30.102 [2024-11-28 09:01:24.182538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.182709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.102 [2024-11-28 09:01:24.182731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.102 [2024-11-28 09:01:24.182741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:30.102 [2024-11-28 09:01:24.182750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.192725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.102 [2024-11-28 09:01:24.192768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.102 [2024-11-28 09:01:24.192779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.102 [2024-11-28 09:01:24.192788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.192910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.102 [2024-11-28 09:01:24.192922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.102 [2024-11-28 09:01:24.192930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:30.102 [2024-11-28 09:01:24.192938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.192987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.102 [2024-11-28 09:01:24.192997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.102 [2024-11-28 09:01:24.193005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.102 [2024-11-28 09:01:24.193041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.193061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.102 [2024-11-28 09:01:24.193073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.102 [2024-11-28 09:01:24.193081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.102 [2024-11-28 09:01:24.193090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.102 [2024-11-28 09:01:24.213225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.102 [2024-11-28 09:01:24.213280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.102 [2024-11-28 09:01:24.213293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.102 [2024-11-28 09:01:24.213301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.363 [2024-11-28 09:01:24.229087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.364 [2024-11-28 09:01:24.229153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229242] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.364 [2024-11-28 09:01:24.229264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.364 [2024-11-28 09:01:24.229341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.364 [2024-11-28 09:01:24.229477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.364 [2024-11-28 09:01:24.229545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.364 [2024-11-28 
09:01:24.229636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.364 [2024-11-28 09:01:24.229723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.364 [2024-11-28 09:01:24.229736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.364 [2024-11-28 09:01:24.229747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.364 [2024-11-28 09:01:24.229984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.351 ms, result 0 00:17:30.625 00:17:30.625 00:17:30.625 09:01:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:30.625 09:01:24 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:31.195 09:01:25 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.195 [2024-11-28 09:01:25.280029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:31.195 [2024-11-28 09:01:25.280219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86226 ] 00:17:31.455 [2024-11-28 09:01:25.434460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.455 [2024-11-28 09:01:25.504619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.716 [2024-11-28 09:01:25.613001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.716 [2024-11-28 09:01:25.613074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.716 [2024-11-28 09:01:25.770674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.716 [2024-11-28 09:01:25.770721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.716 [2024-11-28 09:01:25.770734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.716 [2024-11-28 09:01:25.770743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.716 [2024-11-28 09:01:25.773102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.716 [2024-11-28 09:01:25.773138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.716 [2024-11-28 09:01:25.773151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:17:31.716 [2024-11-28 09:01:25.773158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.716 [2024-11-28 09:01:25.773230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.717 [2024-11-28 09:01:25.773542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.717 [2024-11-28 
09:01:25.773577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.773585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.717 [2024-11-28 09:01:25.773595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:31.717 [2024-11-28 09:01:25.773603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.775040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.717 [2024-11-28 09:01:25.777914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.777949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.717 [2024-11-28 09:01:25.777963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:17:31.717 [2024-11-28 09:01:25.777976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.778032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.778042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.717 [2024-11-28 09:01:25.778050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:31.717 [2024-11-28 09:01:25.778057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.784369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.784398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.717 [2024-11-28 09:01:25.784414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.275 ms 00:17:31.717 [2024-11-28 09:01:25.784421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.784528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.784541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.717 [2024-11-28 09:01:25.784550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:31.717 [2024-11-28 09:01:25.784560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.784589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.784601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.717 [2024-11-28 09:01:25.784609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:31.717 [2024-11-28 09:01:25.784616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.784637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.717 [2024-11-28 09:01:25.786380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.786406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.717 [2024-11-28 09:01:25.786420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:17:31.717 [2024-11-28 09:01:25.786429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.786471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.786486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.717 [2024-11-28 09:01:25.786497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:31.717 [2024-11-28 09:01:25.786504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.786522] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:17:31.717 [2024-11-28 09:01:25.786544] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:31.717 [2024-11-28 09:01:25.786579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:31.717 [2024-11-28 09:01:25.786595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:31.717 [2024-11-28 09:01:25.786703] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.717 [2024-11-28 09:01:25.786714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.717 [2024-11-28 09:01:25.786724] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.717 [2024-11-28 09:01:25.786734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.717 [2024-11-28 09:01:25.786743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.717 [2024-11-28 09:01:25.786752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.717 [2024-11-28 09:01:25.786760] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.717 [2024-11-28 09:01:25.786768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.717 [2024-11-28 09:01:25.786775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.717 [2024-11-28 09:01:25.786783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.786794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.717 [2024-11-28 09:01:25.786816] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:31.717 [2024-11-28 09:01:25.786824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.786911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.717 [2024-11-28 09:01:25.786921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.717 [2024-11-28 09:01:25.786933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:31.717 [2024-11-28 09:01:25.786940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.717 [2024-11-28 09:01:25.787042] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.717 [2024-11-28 09:01:25.787064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.717 [2024-11-28 09:01:25.787073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.717 [2024-11-28 09:01:25.787101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.717 [2024-11-28 09:01:25.787128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.717 [2024-11-28 09:01:25.787143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.717 [2024-11-28 09:01:25.787151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.717 [2024-11-28 09:01:25.787160] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.717 [2024-11-28 09:01:25.787168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.717 [2024-11-28 09:01:25.787176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:31.717 [2024-11-28 09:01:25.787184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.717 [2024-11-28 09:01:25.787199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.717 [2024-11-28 09:01:25.787224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.717 [2024-11-28 09:01:25.787249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.717 [2024-11-28 09:01:25.787280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.717 [2024-11-28 09:01:25.787287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.717 [2024-11-28 09:01:25.787295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.718 [2024-11-28 09:01:25.787303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:31.718 [2024-11-28 09:01:25.787311] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.718 [2024-11-28 09:01:25.787319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.718 [2024-11-28 09:01:25.787326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:31.718 [2024-11-28 09:01:25.787333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.718 [2024-11-28 09:01:25.787341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.718 [2024-11-28 09:01:25.787349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:31.718 [2024-11-28 09:01:25.787356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.718 [2024-11-28 09:01:25.787363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.718 [2024-11-28 09:01:25.787371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:31.718 [2024-11-28 09:01:25.787378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.718 [2024-11-28 09:01:25.787387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.718 [2024-11-28 09:01:25.787396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:31.718 [2024-11-28 09:01:25.787403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.718 [2024-11-28 09:01:25.787410] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.718 [2024-11-28 09:01:25.787419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.718 [2024-11-28 09:01:25.787427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.718 [2024-11-28 09:01:25.787435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.718 [2024-11-28 09:01:25.787444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:17:31.718 [2024-11-28 09:01:25.787453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.718 [2024-11-28 09:01:25.787460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.718 [2024-11-28 09:01:25.787468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:31.718 [2024-11-28 09:01:25.787476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.718 [2024-11-28 09:01:25.787484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.718 [2024-11-28 09:01:25.787496] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.718 [2024-11-28 09:01:25.787505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.718 [2024-11-28 09:01:25.787522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:31.718 [2024-11-28 09:01:25.787529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.718 [2024-11-28 09:01:25.787537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:31.718 [2024-11-28 09:01:25.787544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:31.718 [2024-11-28 09:01:25.787551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:31.718 [2024-11-28 09:01:25.787558] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:31.718 [2024-11-28 09:01:25.787565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:31.718 [2024-11-28 09:01:25.787572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:31.718 [2024-11-28 09:01:25.787580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:31.718 [2024-11-28 09:01:25.787615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.718 [2024-11-28 09:01:25.787624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.718 [2024-11-28 09:01:25.787641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:31.718 [2024-11-28 09:01:25.787648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.718 [2024-11-28 09:01:25.787656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.718 [2024-11-28 09:01:25.787664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.787671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.718 [2024-11-28 09:01:25.787680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:17:31.718 [2024-11-28 09:01:25.787687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.807761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.807814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.718 [2024-11-28 09:01:25.807827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.023 ms 00:17:31.718 [2024-11-28 09:01:25.807835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.807968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.807980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.718 [2024-11-28 09:01:25.807996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:31.718 [2024-11-28 09:01:25.808005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.818705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.818744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.718 [2024-11-28 
09:01:25.818757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.678 ms 00:17:31.718 [2024-11-28 09:01:25.818766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.818854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.818867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.718 [2024-11-28 09:01:25.818882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.718 [2024-11-28 09:01:25.818892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.819321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.819348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.718 [2024-11-28 09:01:25.819359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:17:31.718 [2024-11-28 09:01:25.819370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.819541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.819553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.718 [2024-11-28 09:01:25.819564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:31.718 [2024-11-28 09:01:25.819576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.718 [2024-11-28 09:01:25.825741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.718 [2024-11-28 09:01:25.825778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.718 [2024-11-28 09:01:25.825788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:17:31.719 [2024-11-28 09:01:25.825817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:31.719 [2024-11-28 09:01:25.828655] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:31.719 [2024-11-28 09:01:25.828694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.719 [2024-11-28 09:01:25.828709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.719 [2024-11-28 09:01:25.828717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.719 [2024-11-28 09:01:25.828726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.808 ms 00:17:31.719 [2024-11-28 09:01:25.828733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.980 [2024-11-28 09:01:25.843374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.980 [2024-11-28 09:01:25.843404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.980 [2024-11-28 09:01:25.843416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:17:31.980 [2024-11-28 09:01:25.843424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.980 [2024-11-28 09:01:25.845496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.980 [2024-11-28 09:01:25.845526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.980 [2024-11-28 09:01:25.845534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.007 ms 00:17:31.980 [2024-11-28 09:01:25.845541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.980 [2024-11-28 09:01:25.847210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.980 [2024-11-28 09:01:25.847238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.980 [2024-11-28 09:01:25.847254] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:17:31.980 [2024-11-28 09:01:25.847261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.980 [2024-11-28 09:01:25.847576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.980 [2024-11-28 09:01:25.847637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.980 [2024-11-28 09:01:25.847646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:31.980 [2024-11-28 09:01:25.847656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.980 [2024-11-28 09:01:25.866573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.866611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.981 [2024-11-28 09:01:25.866621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.885 ms 00:17:31.981 [2024-11-28 09:01:25.866629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.874897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.981 [2024-11-28 09:01:25.891323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.891363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.981 [2024-11-28 09:01:25.891374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:17:31.981 [2024-11-28 09:01:25.891382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.891455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.891469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:31.981 [2024-11-28 09:01:25.891477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:31.981 
[2024-11-28 09:01:25.891485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.891537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.891546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.981 [2024-11-28 09:01:25.891555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:31.981 [2024-11-28 09:01:25.891562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.891588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.891596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.981 [2024-11-28 09:01:25.891604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.981 [2024-11-28 09:01:25.891611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.891641] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:31.981 [2024-11-28 09:01:25.891653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.891660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:31.981 [2024-11-28 09:01:25.891668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:31.981 [2024-11-28 09:01:25.891675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.895945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.895979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.981 [2024-11-28 09:01:25.895989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:17:31.981 [2024-11-28 09:01:25.895996] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.896083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:25.896097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.981 [2024-11-28 09:01:25.896106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:31.981 [2024-11-28 09:01:25.896113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:25.897088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.981 [2024-11-28 09:01:25.898109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.089 ms, result 0 00:17:31.981 [2024-11-28 09:01:25.899027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.981 [2024-11-28 09:01:25.908406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.981  [2024-11-28T09:01:26.101Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-28 09:01:26.080783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.981 [2024-11-28 09:01:26.081426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.081459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.981 [2024-11-28 09:01:26.081475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.981 [2024-11-28 09:01:26.081482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:26.081501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:31.981 [2024-11-28 09:01:26.082064] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.082091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.981 [2024-11-28 09:01:26.082100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:17:31.981 [2024-11-28 09:01:26.082108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:26.083908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.083938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.981 [2024-11-28 09:01:26.083947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:17:31.981 [2024-11-28 09:01:26.083955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:26.087879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.087904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.981 [2024-11-28 09:01:26.087914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:17:31.981 [2024-11-28 09:01:26.087922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:26.094832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.094866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.981 [2024-11-28 09:01:26.094876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:17:31.981 [2024-11-28 09:01:26.094884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.981 [2024-11-28 09:01:26.097042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.981 [2024-11-28 09:01:26.097073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.981 
[2024-11-28 09:01:26.097083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms 00:17:31.981 [2024-11-28 09:01:26.097100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.101071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.101105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.248 [2024-11-28 09:01:26.101119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:17:32.248 [2024-11-28 09:01:26.101130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.101250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.101260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.248 [2024-11-28 09:01:26.101270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:32.248 [2024-11-28 09:01:26.101278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.103937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.103979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.248 [2024-11-28 09:01:26.103991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:17:32.248 [2024-11-28 09:01:26.103999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.106100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.106131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.248 [2024-11-28 09:01:26.106140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:17:32.248 [2024-11-28 09:01:26.106148] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.107740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.107772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.248 [2024-11-28 09:01:26.107780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:17:32.248 [2024-11-28 09:01:26.107787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.109569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.248 [2024-11-28 09:01:26.109600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.248 [2024-11-28 09:01:26.109610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:17:32.248 [2024-11-28 09:01:26.109618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.248 [2024-11-28 09:01:26.109648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.248 [2024-11-28 09:01:26.109668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.248 [2024-11-28 09:01:26.109833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109946] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.109999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110052] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110156] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 
09:01:26.110260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 
[2024-11-28 09:01:26.110362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.249 [2024-11-28 09:01:26.110443] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.249 [2024-11-28 09:01:26.110451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:32.249 [2024-11-28 09:01:26.110470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.249 [2024-11-28 09:01:26.110478] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.249 [2024-11-28 09:01:26.110485] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] user writes: 0 00:17:32.249 [2024-11-28 09:01:26.110494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.249 [2024-11-28 09:01:26.110501] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.249 [2024-11-28 09:01:26.110509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.249 [2024-11-28 09:01:26.110517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.249 [2024-11-28 09:01:26.110523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.249 [2024-11-28 09:01:26.110530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.249 [2024-11-28 09:01:26.110536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.250 [2024-11-28 09:01:26.110544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.250 [2024-11-28 09:01:26.110554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:17:32.250 [2024-11-28 09:01:26.110562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.112104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.250 [2024-11-28 09:01:26.112129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.250 [2024-11-28 09:01:26.112138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:17:32.250 [2024-11-28 09:01:26.112146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.112239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.250 [2024-11-28 09:01:26.112248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.250 [2024-11-28 09:01:26.112259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:32.250 [2024-11-28 09:01:26.112266] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.118151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.118182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.250 [2024-11-28 09:01:26.118192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.118200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.118262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.118281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.250 [2024-11-28 09:01:26.118290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.118297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.118331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.118342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.250 [2024-11-28 09:01:26.118350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.118357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.118374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.118382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.250 [2024-11-28 09:01:26.118392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.118399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.129856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 
09:01:26.129893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.250 [2024-11-28 09:01:26.129904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.129912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.138856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.138897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.250 [2024-11-28 09:01:26.138907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.138915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.138942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.138952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.250 [2024-11-28 09:01:26.138964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.138971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.139009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.250 [2024-11-28 09:01:26.139018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.139027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.139113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.250 [2024-11-28 09:01:26.139120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.139128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.139168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.250 [2024-11-28 09:01:26.139176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.139183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.139238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.250 [2024-11-28 09:01:26.139246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.139254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.250 [2024-11-28 09:01:26.139315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.250 [2024-11-28 09:01:26.139324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.250 [2024-11-28 09:01:26.139336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.250 [2024-11-28 09:01:26.139485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.023 ms, result 0 00:17:32.573 00:17:32.573 00:17:32.573 09:01:26 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86246 00:17:32.573 09:01:26 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86246 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86246 ']' 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.573 09:01:26 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.573 09:01:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:32.573 [2024-11-28 09:01:26.474887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:32.573 [2024-11-28 09:01:26.475043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86246 ] 00:17:32.573 [2024-11-28 09:01:26.631317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.850 [2024-11-28 09:01:26.703988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.422 09:01:27 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.422 09:01:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:33.422 09:01:27 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:33.690 [2024-11-28 09:01:27.553907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.690 [2024-11-28 09:01:27.553992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.690 [2024-11-28 09:01:27.732918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 
09:01:27.732976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:33.690 [2024-11-28 09:01:27.732993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:33.690 [2024-11-28 09:01:27.733005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.735879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.735936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.690 [2024-11-28 09:01:27.735948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:17:33.690 [2024-11-28 09:01:27.735958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.736058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:33.690 [2024-11-28 09:01:27.736473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:33.690 [2024-11-28 09:01:27.736528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.736539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.690 [2024-11-28 09:01:27.736552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:17:33.690 [2024-11-28 09:01:27.736563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.738995] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:33.690 [2024-11-28 09:01:27.743767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.743832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:33.690 [2024-11-28 09:01:27.743846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:17:33.690 
[2024-11-28 09:01:27.743860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.743955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.743971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:33.690 [2024-11-28 09:01:27.743990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:33.690 [2024-11-28 09:01:27.743998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.755606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.755647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.690 [2024-11-28 09:01:27.755662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.552 ms 00:17:33.690 [2024-11-28 09:01:27.755671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.755816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.755829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.690 [2024-11-28 09:01:27.755842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:33.690 [2024-11-28 09:01:27.755851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.755889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.755903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:33.690 [2024-11-28 09:01:27.755913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:33.690 [2024-11-28 09:01:27.755928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.755958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:33.690 [2024-11-28 09:01:27.758705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.758755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.690 [2024-11-28 09:01:27.758765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:17:33.690 [2024-11-28 09:01:27.758777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.758841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.758859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:33.690 [2024-11-28 09:01:27.758868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:33.690 [2024-11-28 09:01:27.758881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.758905] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:33.690 [2024-11-28 09:01:27.758933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:33.690 [2024-11-28 09:01:27.758989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:33.690 [2024-11-28 09:01:27.759011] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:33.690 [2024-11-28 09:01:27.759125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:33.690 [2024-11-28 09:01:27.759162] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:33.690 [2024-11-28 09:01:27.759175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x190 bytes 00:17:33.690 [2024-11-28 09:01:27.759191] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:33.690 [2024-11-28 09:01:27.759201] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:33.690 [2024-11-28 09:01:27.759214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:33.690 [2024-11-28 09:01:27.759225] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:33.690 [2024-11-28 09:01:27.759235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:33.690 [2024-11-28 09:01:27.759244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:33.690 [2024-11-28 09:01:27.759254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.759266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:33.690 [2024-11-28 09:01:27.759276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:17:33.690 [2024-11-28 09:01:27.759283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.759375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.690 [2024-11-28 09:01:27.759396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:33.690 [2024-11-28 09:01:27.759411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:33.690 [2024-11-28 09:01:27.759420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.690 [2024-11-28 09:01:27.759531] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:33.690 [2024-11-28 09:01:27.759552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:33.690 [2024-11-28 09:01:27.759568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
0.00 MiB 00:17:33.690 [2024-11-28 09:01:27.759578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.690 [2024-11-28 09:01:27.759592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:33.690 [2024-11-28 09:01:27.759603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:33.690 [2024-11-28 09:01:27.759615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:33.690 [2024-11-28 09:01:27.759623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:33.690 [2024-11-28 09:01:27.759642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:33.690 [2024-11-28 09:01:27.759652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.690 [2024-11-28 09:01:27.759664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:33.690 [2024-11-28 09:01:27.759672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:33.690 [2024-11-28 09:01:27.759684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.691 [2024-11-28 09:01:27.759693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:33.691 [2024-11-28 09:01:27.759704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:33.691 [2024-11-28 09:01:27.759712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:33.691 [2024-11-28 09:01:27.759733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:33.691 [2024-11-28 09:01:27.759764] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 91.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:33.691 [2024-11-28 09:01:27.759789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:33.691 [2024-11-28 09:01:27.759831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:33.691 [2024-11-28 09:01:27.759856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:33.691 [2024-11-28 09:01:27.759882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.691 [2024-11-28 09:01:27.759897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:33.691 [2024-11-28 09:01:27.759904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:33.691 [2024-11-28 09:01:27.759918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.691 [2024-11-28 09:01:27.759926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:33.691 [2024-11-28 09:01:27.759934] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:33.691 [2024-11-28 09:01:27.759940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:33.691 [2024-11-28 09:01:27.759956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:33.691 [2024-11-28 09:01:27.759964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.691 [2024-11-28 09:01:27.759972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:33.691 [2024-11-28 09:01:27.759982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:33.691 [2024-11-28 09:01:27.759989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.691 [2024-11-28 09:01:27.759999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.691 [2024-11-28 09:01:27.760007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:33.691 [2024-11-28 09:01:27.760017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:33.691 [2024-11-28 09:01:27.760025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:33.691 [2024-11-28 09:01:27.760035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:33.691 [2024-11-28 09:01:27.760041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:33.691 [2024-11-28 09:01:27.760055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:33.691 [2024-11-28 09:01:27.760066] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:33.691 [2024-11-28 09:01:27.760078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.691 [2024-11-28 
09:01:27.760087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:33.691 [2024-11-28 09:01:27.760097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:33.691 [2024-11-28 09:01:27.760105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:33.691 [2024-11-28 09:01:27.760117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:33.691 [2024-11-28 09:01:27.760125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:33.691 [2024-11-28 09:01:27.760134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:33.691 [2024-11-28 09:01:27.760142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:33.691 [2024-11-28 09:01:27.760154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:33.691 [2024-11-28 09:01:27.760161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:33.691 [2024-11-28 09:01:27.760171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:33.691 [2024-11-28 09:01:27.760216] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:33.691 [2024-11-28 09:01:27.760233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:33.691 [2024-11-28 09:01:27.760254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:33.691 [2024-11-28 09:01:27.760261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:33.691 [2024-11-28 09:01:27.760273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:33.691 [2024-11-28 09:01:27.760281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.760292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:33.691 [2024-11-28 09:01:27.760300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:17:33.691 [2024-11-28 09:01:27.760309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.780817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.780866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.691 [2024-11-28 09:01:27.780879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.405 ms 00:17:33.691 [2024-11-28 09:01:27.780890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.781031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.781077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:33.691 [2024-11-28 09:01:27.781099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:33.691 [2024-11-28 09:01:27.781114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.797704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.797746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.691 [2024-11-28 09:01:27.797759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.564 ms 00:17:33.691 [2024-11-28 09:01:27.797771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.797864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.797882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.691 [2024-11-28 09:01:27.797892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.691 [2024-11-28 09:01:27.797904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.798579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.798623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.691 [2024-11-28 09:01:27.798635] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:17:33.691 [2024-11-28 09:01:27.798647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.691 [2024-11-28 09:01:27.798846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.691 [2024-11-28 09:01:27.798864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.691 [2024-11-28 09:01:27.798877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:17:33.691 [2024-11-28 09:01:27.798888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.829505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.829572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.954 [2024-11-28 09:01:27.829587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.589 ms 00:17:33.954 [2024-11-28 09:01:27.829600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.834550] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:33.954 [2024-11-28 09:01:27.834610] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:33.954 [2024-11-28 09:01:27.834625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.834638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:33.954 [2024-11-28 09:01:27.834648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:17:33.954 [2024-11-28 09:01:27.834659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.851459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.851515] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:33.954 [2024-11-28 09:01:27.851528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.708 ms 00:17:33.954 [2024-11-28 09:01:27.851543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.854877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.854926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:33.954 [2024-11-28 09:01:27.854937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:17:33.954 [2024-11-28 09:01:27.854947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.857595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.857645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:33.954 [2024-11-28 09:01:27.857656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:17:33.954 [2024-11-28 09:01:27.857665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.858054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.858085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:33.954 [2024-11-28 09:01:27.858096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:33.954 [2024-11-28 09:01:27.858107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.889266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.889332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:33.954 [2024-11-28 09:01:27.889346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.135 ms 00:17:33.954 [2024-11-28 09:01:27.889362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.898397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:33.954 [2024-11-28 09:01:27.923160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.923214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:33.954 [2024-11-28 09:01:27.923231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.691 ms 00:17:33.954 [2024-11-28 09:01:27.923241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.923356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.923369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:33.954 [2024-11-28 09:01:27.923383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:33.954 [2024-11-28 09:01:27.923395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.923471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.923483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:33.954 [2024-11-28 09:01:27.923504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:33.954 [2024-11-28 09:01:27.923512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.923548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.923557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:33.954 [2024-11-28 09:01:27.923571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:33.954 [2024-11-28 
09:01:27.923579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.923625] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:33.954 [2024-11-28 09:01:27.923641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.923651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:33.954 [2024-11-28 09:01:27.923660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:33.954 [2024-11-28 09:01:27.923670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.930712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.930770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:33.954 [2024-11-28 09:01:27.930781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.018 ms 00:17:33.954 [2024-11-28 09:01:27.930792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.930915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.954 [2024-11-28 09:01:27.930934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:33.954 [2024-11-28 09:01:27.930946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:33.954 [2024-11-28 09:01:27.930957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.954 [2024-11-28 09:01:27.932238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.954 [2024-11-28 09:01:27.933684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 198.907 ms, result 0 00:17:33.954 [2024-11-28 09:01:27.935740] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:17:33.954 Some configs were skipped because the RPC state that can call them passed over. 00:17:33.954 09:01:27 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:34.215 [2024-11-28 09:01:28.165457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.215 [2024-11-28 09:01:28.165511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.215 [2024-11-28 09:01:28.165526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.082 ms 00:17:34.215 [2024-11-28 09:01:28.165536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.215 [2024-11-28 09:01:28.165577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.209 ms, result 0 00:17:34.215 true 00:17:34.215 09:01:28 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:34.477 [2024-11-28 09:01:28.377290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.477 [2024-11-28 09:01:28.377350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:34.477 [2024-11-28 09:01:28.377362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:17:34.477 [2024-11-28 09:01:28.377373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.477 [2024-11-28 09:01:28.377410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.808 ms, result 0 00:17:34.477 true 00:17:34.477 09:01:28 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86246 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86246 ']' 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86246 00:17:34.477 09:01:28 
ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86246 00:17:34.477 killing process with pid 86246 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86246' 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86246 00:17:34.477 09:01:28 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86246 00:17:34.741 [2024-11-28 09:01:28.641370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.641466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:34.741 [2024-11-28 09:01:28.641487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:34.741 [2024-11-28 09:01:28.641496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.641530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:34.741 [2024-11-28 09:01:28.642505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.642564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:34.741 [2024-11-28 09:01:28.642580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:17:34.741 [2024-11-28 09:01:28.642592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.642960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.642988] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:34.741 [2024-11-28 09:01:28.642998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:34.741 [2024-11-28 09:01:28.643015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.647617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.647669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:34.741 [2024-11-28 09:01:28.647680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.580 ms 00:17:34.741 [2024-11-28 09:01:28.647692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.654785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.654856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:34.741 [2024-11-28 09:01:28.654874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:17:34.741 [2024-11-28 09:01:28.654888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.658173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.658234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:34.741 [2024-11-28 09:01:28.658247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:17:34.741 [2024-11-28 09:01:28.658258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.664425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.664480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:34.741 [2024-11-28 09:01:28.664492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:17:34.741 
[2024-11-28 09:01:28.664510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.664663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.664677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:34.741 [2024-11-28 09:01:28.664688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:34.741 [2024-11-28 09:01:28.664703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.668247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.668303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:34.741 [2024-11-28 09:01:28.668315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:17:34.741 [2024-11-28 09:01:28.668329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.670307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.670362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:34.741 [2024-11-28 09:01:28.670374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:17:34.741 [2024-11-28 09:01:28.670385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.672775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.741 [2024-11-28 09:01:28.672955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:34.741 [2024-11-28 09:01:28.673021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:17:34.741 [2024-11-28 09:01:28.673063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.675476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:17:34.741 [2024-11-28 09:01:28.675629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:34.741 [2024-11-28 09:01:28.675692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:17:34.741 [2024-11-28 09:01:28.675719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.741 [2024-11-28 09:01:28.675985] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:34.741 [2024-11-28 09:01:28.676046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.676984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
11: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:34.741 [2024-11-28 09:01:28.677112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677417] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677546] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677666] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 
09:01:28.677816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.742 [2024-11-28 09:01:28.677883] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.742 [2024-11-28 09:01:28.677892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:34.742 [2024-11-28 09:01:28.677906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:34.742 [2024-11-28 09:01:28.677914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.742 [2024-11-28 09:01:28.677925] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.742 [2024-11-28 09:01:28.677936] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.742 [2024-11-28 09:01:28.677996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.742 [2024-11-28 09:01:28.678005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:34.742 [2024-11-28 09:01:28.678019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.742 [2024-11-28 09:01:28.678027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.742 
[2024-11-28 09:01:28.678036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.742 [2024-11-28 09:01:28.678046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.743 [2024-11-28 09:01:28.678057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:34.743 [2024-11-28 09:01:28.678070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:17:34.743 [2024-11-28 09:01:28.678110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.681031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.743 [2024-11-28 09:01:28.681096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:34.743 [2024-11-28 09:01:28.681108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:17:34.743 [2024-11-28 09:01:28.681121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.681295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.743 [2024-11-28 09:01:28.681310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:34.743 [2024-11-28 09:01:28.681321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:17:34.743 [2024-11-28 09:01:28.681333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.692463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.692516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.743 [2024-11-28 09:01:28.692526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.692539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.692652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:17:34.743 [2024-11-28 09:01:28.692667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.743 [2024-11-28 09:01:28.692677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.692691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.692743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.692759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.743 [2024-11-28 09:01:28.692768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.692779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.692825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.692837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.743 [2024-11-28 09:01:28.692846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.692859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.713106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.713170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.743 [2024-11-28 09:01:28.713183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.713196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.743 [2024-11-28 09:01:28.729140] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.743 [2024-11-28 09:01:28.729263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.743 [2024-11-28 09:01:28.729340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.743 [2024-11-28 09:01:28.729465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:34.743 [2024-11-28 09:01:28.729538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:34.743 [2024-11-28 09:01:28.729609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.743 [2024-11-28 09:01:28.729644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.743 [2024-11-28 09:01:28.729741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.743 [2024-11-28 09:01:28.729751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.743 [2024-11-28 09:01:28.729770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.743 [2024-11-28 09:01:28.729992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.584 ms, result 0 00:17:35.004 09:01:29 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:35.266 [2024-11-28 09:01:29.164583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:35.266 [2024-11-28 09:01:29.164725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86287 ] 00:17:35.266 [2024-11-28 09:01:29.317890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.528 [2024-11-28 09:01:29.388715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.528 [2024-11-28 09:01:29.538422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.528 [2024-11-28 09:01:29.538507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:35.791 [2024-11-28 09:01:29.702668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.702728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.791 [2024-11-28 09:01:29.702745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:35.791 [2024-11-28 09:01:29.702759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.705610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.705655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.791 [2024-11-28 09:01:29.705670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:17:35.791 [2024-11-28 09:01:29.705678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.705779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.791 [2024-11-28 09:01:29.706078] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.791 [2024-11-28 
09:01:29.706102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.706111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.791 [2024-11-28 09:01:29.706124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:35.791 [2024-11-28 09:01:29.706132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.708475] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:35.791 [2024-11-28 09:01:29.713278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.713328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:35.791 [2024-11-28 09:01:29.713345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:17:35.791 [2024-11-28 09:01:29.713358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.713446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.713457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:35.791 [2024-11-28 09:01:29.713468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:35.791 [2024-11-28 09:01:29.713480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.725121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.725158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.791 [2024-11-28 09:01:29.725171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.590 ms 00:17:35.791 [2024-11-28 09:01:29.725180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.725341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.725353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.791 [2024-11-28 09:01:29.725363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:35.791 [2024-11-28 09:01:29.725372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.725405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.725419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.791 [2024-11-28 09:01:29.725428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:35.791 [2024-11-28 09:01:29.725435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.725457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:35.791 [2024-11-28 09:01:29.728192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.728226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.791 [2024-11-28 09:01:29.728238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:17:35.791 [2024-11-28 09:01:29.728248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.728297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.791 [2024-11-28 09:01:29.728313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.791 [2024-11-28 09:01:29.728325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:35.791 [2024-11-28 09:01:29.728338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.791 [2024-11-28 09:01:29.728360] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:17:35.791 [2024-11-28 09:01:29.728385] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:35.791 [2024-11-28 09:01:29.728432] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:35.791 [2024-11-28 09:01:29.728455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:35.791 [2024-11-28 09:01:29.728570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:35.791 [2024-11-28 09:01:29.728583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.792 [2024-11-28 09:01:29.728596] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:35.792 [2024-11-28 09:01:29.728611] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.792 [2024-11-28 09:01:29.728621] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.792 [2024-11-28 09:01:29.728637] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:35.792 [2024-11-28 09:01:29.728645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.792 [2024-11-28 09:01:29.728653] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:35.792 [2024-11-28 09:01:29.728661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:35.792 [2024-11-28 09:01:29.728670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.792 [2024-11-28 09:01:29.728681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.792 [2024-11-28 09:01:29.728694] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:35.792 [2024-11-28 09:01:29.728703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.792 [2024-11-28 09:01:29.728792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.792 [2024-11-28 09:01:29.728819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.792 [2024-11-28 09:01:29.728828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:35.792 [2024-11-28 09:01:29.728841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.792 [2024-11-28 09:01:29.728949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.792 [2024-11-28 09:01:29.728969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.792 [2024-11-28 09:01:29.728982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.792 [2024-11-28 09:01:29.729019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:35.792 [2024-11-28 09:01:29.729052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.792 [2024-11-28 09:01:29.729100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.792 [2024-11-28 09:01:29.729109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:35.792 [2024-11-28 09:01:29.729118] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.792 [2024-11-28 09:01:29.729128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.792 [2024-11-28 09:01:29.729141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:35.792 [2024-11-28 09:01:29.729150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.792 [2024-11-28 09:01:29.729166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.792 [2024-11-28 09:01:29.729194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.792 [2024-11-28 09:01:29.729219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.792 [2024-11-28 09:01:29.729251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.792 [2024-11-28 09:01:29.729275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729285] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.792 [2024-11-28 09:01:29.729301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.792 [2024-11-28 09:01:29.729315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.792 [2024-11-28 09:01:29.729322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:35.792 [2024-11-28 09:01:29.729329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.792 [2024-11-28 09:01:29.729337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:35.792 [2024-11-28 09:01:29.729345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:35.792 [2024-11-28 09:01:29.729352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:35.792 [2024-11-28 09:01:29.729368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:35.792 [2024-11-28 09:01:29.729375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.792 [2024-11-28 09:01:29.729391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.792 [2024-11-28 09:01:29.729399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.792 [2024-11-28 09:01:29.729419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:17:35.792 [2024-11-28 09:01:29.729427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.792 [2024-11-28 09:01:29.729434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.792 [2024-11-28 09:01:29.729441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.792 [2024-11-28 09:01:29.729448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.792 [2024-11-28 09:01:29.729455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.792 [2024-11-28 09:01:29.729465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.792 [2024-11-28 09:01:29.729478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:35.792 [2024-11-28 09:01:29.729497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:35.792 [2024-11-28 09:01:29.729504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:35.792 [2024-11-28 09:01:29.729512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:35.792 [2024-11-28 09:01:29.729521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:35.792 [2024-11-28 09:01:29.729529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:35.792 [2024-11-28 09:01:29.729536] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:35.792 [2024-11-28 09:01:29.729543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:35.792 [2024-11-28 09:01:29.729550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:35.792 [2024-11-28 09:01:29.729557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:35.792 [2024-11-28 09:01:29.729597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.792 [2024-11-28 09:01:29.729609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.792 [2024-11-28 09:01:29.729633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:35.792 [2024-11-28 09:01:29.729641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.792 [2024-11-28 09:01:29.729649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:35.792 [2024-11-28 09:01:29.729657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.792 [2024-11-28 09:01:29.729666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.792 [2024-11-28 09:01:29.729676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:17:35.792 [2024-11-28 09:01:29.729685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.792 [2024-11-28 09:01:29.759390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.792 [2024-11-28 09:01:29.759457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.792 [2024-11-28 09:01:29.759480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.627 ms 00:17:35.792 [2024-11-28 09:01:29.759495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.759752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.759776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:35.793 [2024-11-28 09:01:29.759793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:35.793 [2024-11-28 09:01:29.759843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.776234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.776278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.793 [2024-11-28 
09:01:29.776290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.353 ms 00:17:35.793 [2024-11-28 09:01:29.776300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.776378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.776390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.793 [2024-11-28 09:01:29.776405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:35.793 [2024-11-28 09:01:29.776414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.777174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.777211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.793 [2024-11-28 09:01:29.777223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:17:35.793 [2024-11-28 09:01:29.777233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.777408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.777421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.793 [2024-11-28 09:01:29.777430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:17:35.793 [2024-11-28 09:01:29.777446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.787794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.787859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.793 [2024-11-28 09:01:29.787876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.322 ms 00:17:35.793 [2024-11-28 09:01:29.787885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:35.793 [2024-11-28 09:01:29.792714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:35.793 [2024-11-28 09:01:29.792771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:35.793 [2024-11-28 09:01:29.792785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.792796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:35.793 [2024-11-28 09:01:29.792821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:17:35.793 [2024-11-28 09:01:29.792830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.809146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.809189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:35.793 [2024-11-28 09:01:29.809199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.235 ms 00:17:35.793 [2024-11-28 09:01:29.809207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.810963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.810991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:35.793 [2024-11-28 09:01:29.810999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:17:35.793 [2024-11-28 09:01:29.811006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.812476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.812503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:35.793 [2024-11-28 09:01:29.812518] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:17:35.793 [2024-11-28 09:01:29.812526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.812856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.812875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:35.793 [2024-11-28 09:01:29.812887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:35.793 [2024-11-28 09:01:29.812898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.832832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.832866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:35.793 [2024-11-28 09:01:29.832878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.911 ms 00:17:35.793 [2024-11-28 09:01:29.832885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.840670] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:35.793 [2024-11-28 09:01:29.857813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.857841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.793 [2024-11-28 09:01:29.857853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.869 ms 00:17:35.793 [2024-11-28 09:01:29.857861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.857937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.857952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:35.793 [2024-11-28 09:01:29.857965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:35.793 
[2024-11-28 09:01:29.857973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.858030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.858043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.793 [2024-11-28 09:01:29.858052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:35.793 [2024-11-28 09:01:29.858059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.858079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.858089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.793 [2024-11-28 09:01:29.858097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.793 [2024-11-28 09:01:29.858105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.858138] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:35.793 [2024-11-28 09:01:29.858150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.858158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:35.793 [2024-11-28 09:01:29.858166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:35.793 [2024-11-28 09:01:29.858173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.862829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.862857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.793 [2024-11-28 09:01:29.862868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:17:35.793 [2024-11-28 09:01:29.862876] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.862954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.793 [2024-11-28 09:01:29.862968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:35.793 [2024-11-28 09:01:29.862980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:35.793 [2024-11-28 09:01:29.862989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.793 [2024-11-28 09:01:29.863895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.793 [2024-11-28 09:01:29.864935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.908 ms, result 0 00:17:35.793 [2024-11-28 09:01:29.866311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.793 [2024-11-28 09:01:29.874297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.179  [2024-11-28T09:01:32.243Z] Copying: 16/256 [MB] (16 MBps) [2024-11-28T09:01:33.183Z] Copying: 35/256 [MB] (18 MBps) [2024-11-28T09:01:34.127Z] Copying: 51/256 [MB] (16 MBps) [2024-11-28T09:01:35.068Z] Copying: 69/256 [MB] (17 MBps) [2024-11-28T09:01:36.006Z] Copying: 86/256 [MB] (17 MBps) [2024-11-28T09:01:36.942Z] Copying: 100/256 [MB] (14 MBps) [2024-11-28T09:01:38.323Z] Copying: 112/256 [MB] (12 MBps) [2024-11-28T09:01:39.258Z] Copying: 123/256 [MB] (10 MBps) [2024-11-28T09:01:40.195Z] Copying: 135/256 [MB] (11 MBps) [2024-11-28T09:01:41.130Z] Copying: 146/256 [MB] (11 MBps) [2024-11-28T09:01:42.069Z] Copying: 157/256 [MB] (11 MBps) [2024-11-28T09:01:43.004Z] Copying: 168/256 [MB] (11 MBps) [2024-11-28T09:01:43.942Z] Copying: 180/256 [MB] (12 MBps) [2024-11-28T09:01:45.319Z] Copying: 192/256 [MB] (11 MBps) [2024-11-28T09:01:46.256Z] Copying: 
204/256 [MB] (11 MBps) [2024-11-28T09:01:47.197Z] Copying: 215/256 [MB] (10 MBps) [2024-11-28T09:01:48.134Z] Copying: 227/256 [MB] (12 MBps) [2024-11-28T09:01:49.072Z] Copying: 239/256 [MB] (12 MBps) [2024-11-28T09:01:49.642Z] Copying: 250/256 [MB] (10 MBps) [2024-11-28T09:01:49.903Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-28 09:01:49.788126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:55.783 [2024-11-28 09:01:49.789557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.789588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.783 [2024-11-28 09:01:49.789608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.783 [2024-11-28 09:01:49.789616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.789634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:55.783 [2024-11-28 09:01:49.790182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.790208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.783 [2024-11-28 09:01:49.790216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:17:55.783 [2024-11-28 09:01:49.790222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.790448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.790462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.783 [2024-11-28 09:01:49.790475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:17:55.783 [2024-11-28 09:01:49.790482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 
09:01:49.793245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.793263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.783 [2024-11-28 09:01:49.793271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:17:55.783 [2024-11-28 09:01:49.793279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.799093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.799118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:55.783 [2024-11-28 09:01:49.799126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.799 ms 00:17:55.783 [2024-11-28 09:01:49.799138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.800910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.800938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.783 [2024-11-28 09:01:49.800946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:17:55.783 [2024-11-28 09:01:49.800961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.804491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.804521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:55.783 [2024-11-28 09:01:49.804535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:17:55.783 [2024-11-28 09:01:49.804541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.804635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.804643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L 
metadata 00:17:55.783 [2024-11-28 09:01:49.804650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:55.783 [2024-11-28 09:01:49.804656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.807382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.807416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:55.783 [2024-11-28 09:01:49.807425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:17:55.783 [2024-11-28 09:01:49.807431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.809398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.809425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:55.783 [2024-11-28 09:01:49.809432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:17:55.783 [2024-11-28 09:01:49.809438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.810996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.811022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.783 [2024-11-28 09:01:49.811030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:17:55.783 [2024-11-28 09:01:49.811035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.812694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.783 [2024-11-28 09:01:49.812721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.783 [2024-11-28 09:01:49.812729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:17:55.783 [2024-11-28 09:01:49.812736] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.783 [2024-11-28 09:01:49.812761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.783 [2024-11-28 09:01:49.812780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 
wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.812994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.813000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.813006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.813011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.813017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:55.783 [2024-11-28 09:01:49.813022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 
/ 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 
0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
69: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:55.784 [2024-11-28 09:01:49.813419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:55.784 [2024-11-28 09:01:49.813426] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 253ae1f8-61e7-4809-803e-55b1c37dcf6e 00:17:55.784 [2024-11-28 09:01:49.813439] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.784 [2024-11-28 09:01:49.813445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.784 [2024-11-28 09:01:49.813450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.784 [2024-11-28 09:01:49.813457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.784 [2024-11-28 09:01:49.813462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.784 [2024-11-28 09:01:49.813468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:55.784 [2024-11-28 09:01:49.813476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.784 [2024-11-28 09:01:49.813481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.784 [2024-11-28 09:01:49.813486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:55.784 [2024-11-28 09:01:49.813492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.784 [2024-11-28 09:01:49.813498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.784 [2024-11-28 
09:01:49.813508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:17:55.784 [2024-11-28 09:01:49.813514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.784 [2024-11-28 09:01:49.815337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.784 [2024-11-28 09:01:49.815358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.784 [2024-11-28 09:01:49.815366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:17:55.784 [2024-11-28 09:01:49.815372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.784 [2024-11-28 09:01:49.815461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.784 [2024-11-28 09:01:49.815472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.784 [2024-11-28 09:01:49.815480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:55.784 [2024-11-28 09:01:49.815487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.784 [2024-11-28 09:01:49.821149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.784 [2024-11-28 09:01:49.821176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.784 [2024-11-28 09:01:49.821184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.784 [2024-11-28 09:01:49.821191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.784 [2024-11-28 09:01:49.821258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.784 [2024-11-28 09:01:49.821279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.784 [2024-11-28 09:01:49.821285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.784 [2024-11-28 09:01:49.821291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:55.784 [2024-11-28 09:01:49.821324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.784 [2024-11-28 09:01:49.821332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.784 [2024-11-28 09:01:49.821339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.821345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.821361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.821368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.785 [2024-11-28 09:01:49.821376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.821382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.832907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.832940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.785 [2024-11-28 09:01:49.832949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.832956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.785 [2024-11-28 09:01:49.842173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842222] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.785 [2024-11-28 09:01:49.842228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.785 [2024-11-28 09:01:49.842275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.785 [2024-11-28 09:01:49.842355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:55.785 [2024-11-28 09:01:49.842403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.785 [2024-11-28 09:01:49.842468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:55.785 [2024-11-28 09:01:49.842473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.785 [2024-11-28 09:01:49.842524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.785 [2024-11-28 09:01:49.842531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.785 [2024-11-28 09:01:49.842540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.785 [2024-11-28 09:01:49.842661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.084 ms, result 0 00:17:56.044 00:17:56.044 00:17:56.044 09:01:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:56.613 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:56.613 09:01:50 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86246 00:17:56.613 09:01:50 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86246 ']' 00:17:56.613 09:01:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86246 00:17:56.613 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86246) - No such process 00:17:56.613 Process with pid 86246 is not found 00:17:56.613 09:01:50 
ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86246 is not found' 00:17:56.613 00:17:56.613 real 1m24.179s 00:17:56.613 user 1m43.354s 00:17:56.613 sys 0m5.972s 00:17:56.613 09:01:50 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.613 ************************************ 00:17:56.613 END TEST ftl_trim 00:17:56.613 ************************************ 00:17:56.613 09:01:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:56.876 09:01:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:56.876 09:01:50 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:56.876 09:01:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.876 09:01:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:56.876 ************************************ 00:17:56.876 START TEST ftl_restore 00:17:56.876 ************************************ 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:56.876 * Looking for test storage... 
00:17:56.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.876 09:01:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:56.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.876 --rc genhtml_branch_coverage=1 00:17:56.876 --rc genhtml_function_coverage=1 00:17:56.876 --rc genhtml_legend=1 00:17:56.876 --rc geninfo_all_blocks=1 00:17:56.876 --rc geninfo_unexecuted_blocks=1 00:17:56.876 00:17:56.876 ' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:56.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.876 --rc genhtml_branch_coverage=1 00:17:56.876 --rc genhtml_function_coverage=1 00:17:56.876 --rc genhtml_legend=1 00:17:56.876 --rc geninfo_all_blocks=1 00:17:56.876 --rc geninfo_unexecuted_blocks=1 
00:17:56.876 00:17:56.876 ' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:56.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.876 --rc genhtml_branch_coverage=1 00:17:56.876 --rc genhtml_function_coverage=1 00:17:56.876 --rc genhtml_legend=1 00:17:56.876 --rc geninfo_all_blocks=1 00:17:56.876 --rc geninfo_unexecuted_blocks=1 00:17:56.876 00:17:56.876 ' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:56.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.876 --rc genhtml_branch_coverage=1 00:17:56.876 --rc genhtml_function_coverage=1 00:17:56.876 --rc genhtml_legend=1 00:17:56.876 --rc geninfo_all_blocks=1 00:17:56.876 --rc geninfo_unexecuted_blocks=1 00:17:56.876 00:17:56.876 ' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zGRFHKYcVq 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86579 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86579 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86579 ']' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.876 09:01:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.876 09:01:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:57.138 [2024-11-28 09:01:51.009879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:57.138 [2024-11-28 09:01:51.009999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86579 ] 00:17:57.138 [2024-11-28 09:01:51.156153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.138 [2024-11-28 09:01:51.215718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.083 09:01:51 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.083 09:01:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:58.084 09:01:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:58.084 09:01:52 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:58.084 09:01:52 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:58.084 09:01:52 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:58.084 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:58.084 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:58.084 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:58.084 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:58.084 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:58.345 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:58.345 { 00:17:58.345 "name": "nvme0n1", 00:17:58.345 "aliases": [ 00:17:58.345 "baa828b8-b685-41b6-84fc-d57b5523794f" 00:17:58.345 ], 00:17:58.345 "product_name": "NVMe disk", 00:17:58.345 "block_size": 4096, 00:17:58.345 "num_blocks": 1310720, 00:17:58.345 "uuid": "baa828b8-b685-41b6-84fc-d57b5523794f", 00:17:58.345 "numa_id": -1, 00:17:58.345 "assigned_rate_limits": { 00:17:58.345 "rw_ios_per_sec": 0, 00:17:58.345 "rw_mbytes_per_sec": 0, 00:17:58.345 "r_mbytes_per_sec": 0, 00:17:58.345 "w_mbytes_per_sec": 0 00:17:58.345 }, 00:17:58.345 "claimed": true, 00:17:58.345 "claim_type": "read_many_write_one", 00:17:58.346 "zoned": false, 00:17:58.346 "supported_io_types": { 00:17:58.346 "read": true, 00:17:58.346 "write": true, 00:17:58.346 "unmap": true, 00:17:58.346 "flush": true, 00:17:58.346 "reset": true, 00:17:58.346 "nvme_admin": true, 00:17:58.346 "nvme_io": true, 00:17:58.346 "nvme_io_md": false, 00:17:58.346 "write_zeroes": true, 00:17:58.346 "zcopy": false, 00:17:58.346 "get_zone_info": false, 00:17:58.346 "zone_management": false, 00:17:58.346 "zone_append": false, 00:17:58.346 "compare": true, 00:17:58.346 "compare_and_write": false, 00:17:58.346 "abort": true, 00:17:58.346 "seek_hole": false, 00:17:58.346 "seek_data": false, 00:17:58.346 "copy": true, 00:17:58.346 "nvme_iov_md": false 00:17:58.346 }, 00:17:58.346 "driver_specific": { 00:17:58.346 "nvme": [ 00:17:58.346 { 00:17:58.346 "pci_address": "0000:00:11.0", 00:17:58.346 "trid": { 00:17:58.346 "trtype": "PCIe", 00:17:58.346 "traddr": "0000:00:11.0" 00:17:58.346 }, 00:17:58.346 "ctrlr_data": { 00:17:58.346 "cntlid": 0, 00:17:58.346 "vendor_id": "0x1b36", 00:17:58.346 "model_number": "QEMU NVMe Ctrl", 00:17:58.346 "serial_number": "12341", 00:17:58.346 "firmware_revision": "8.0.0", 00:17:58.346 "subnqn": "nqn.2019-08.org.qemu:12341", 
00:17:58.346 "oacs": { 00:17:58.346 "security": 0, 00:17:58.346 "format": 1, 00:17:58.346 "firmware": 0, 00:17:58.346 "ns_manage": 1 00:17:58.346 }, 00:17:58.346 "multi_ctrlr": false, 00:17:58.346 "ana_reporting": false 00:17:58.346 }, 00:17:58.346 "vs": { 00:17:58.346 "nvme_version": "1.4" 00:17:58.346 }, 00:17:58.346 "ns_data": { 00:17:58.346 "id": 1, 00:17:58.346 "can_share": false 00:17:58.346 } 00:17:58.346 } 00:17:58.346 ], 00:17:58.346 "mp_policy": "active_passive" 00:17:58.346 } 00:17:58.346 } 00:17:58.346 ]' 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:58.346 09:01:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:58.346 09:01:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:58.346 09:01:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:58.346 09:01:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:58.346 09:01:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:58.346 09:01:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:58.607 09:01:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=add5ba8c-dfae-4d00-9e39-afc542ed44ff 00:17:58.607 09:01:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:58.607 09:01:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u add5ba8c-dfae-4d00-9e39-afc542ed44ff 00:17:58.869 09:01:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:59.128 09:01:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=496dffea-bf54-4f50-9f93-4501063ee912 00:17:59.128 09:01:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 496dffea-bf54-4f50-9f93-4501063ee912 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:59.387 09:01:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.387 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.387 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.387 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:59.387 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:59.387 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:59.646 { 00:17:59.646 "name": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:17:59.646 "aliases": [ 00:17:59.646 "lvs/nvme0n1p0" 00:17:59.646 ], 00:17:59.646 "product_name": "Logical 
Volume", 00:17:59.646 "block_size": 4096, 00:17:59.646 "num_blocks": 26476544, 00:17:59.646 "uuid": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:17:59.646 "assigned_rate_limits": { 00:17:59.646 "rw_ios_per_sec": 0, 00:17:59.646 "rw_mbytes_per_sec": 0, 00:17:59.646 "r_mbytes_per_sec": 0, 00:17:59.646 "w_mbytes_per_sec": 0 00:17:59.646 }, 00:17:59.646 "claimed": false, 00:17:59.646 "zoned": false, 00:17:59.646 "supported_io_types": { 00:17:59.646 "read": true, 00:17:59.646 "write": true, 00:17:59.646 "unmap": true, 00:17:59.646 "flush": false, 00:17:59.646 "reset": true, 00:17:59.646 "nvme_admin": false, 00:17:59.646 "nvme_io": false, 00:17:59.646 "nvme_io_md": false, 00:17:59.646 "write_zeroes": true, 00:17:59.646 "zcopy": false, 00:17:59.646 "get_zone_info": false, 00:17:59.646 "zone_management": false, 00:17:59.646 "zone_append": false, 00:17:59.646 "compare": false, 00:17:59.646 "compare_and_write": false, 00:17:59.646 "abort": false, 00:17:59.646 "seek_hole": true, 00:17:59.646 "seek_data": true, 00:17:59.646 "copy": false, 00:17:59.646 "nvme_iov_md": false 00:17:59.646 }, 00:17:59.646 "driver_specific": { 00:17:59.646 "lvol": { 00:17:59.646 "lvol_store_uuid": "496dffea-bf54-4f50-9f93-4501063ee912", 00:17:59.646 "base_bdev": "nvme0n1", 00:17:59.646 "thin_provision": true, 00:17:59.646 "num_allocated_clusters": 0, 00:17:59.646 "snapshot": false, 00:17:59.646 "clone": false, 00:17:59.646 "esnap_clone": false 00:17:59.646 } 00:17:59.646 } 00:17:59.646 } 00:17:59.646 ]' 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:59.646 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:59.646 09:01:53 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # echo 103424 00:17:59.646 09:01:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:59.646 09:01:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:59.646 09:01:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:59.905 09:01:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:59.905 09:01:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:59.905 09:01:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.905 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d632abd9-9c0d-43a9-9862-6e8831f14497 00:17:59.905 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.905 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:59.905 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:59.905 09:01:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d632abd9-9c0d-43a9-9862-6e8831f14497 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.163 { 00:18:00.163 "name": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:18:00.163 "aliases": [ 00:18:00.163 "lvs/nvme0n1p0" 00:18:00.163 ], 00:18:00.163 "product_name": "Logical Volume", 00:18:00.163 "block_size": 4096, 00:18:00.163 "num_blocks": 26476544, 00:18:00.163 "uuid": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:18:00.163 "assigned_rate_limits": { 00:18:00.163 "rw_ios_per_sec": 0, 00:18:00.163 "rw_mbytes_per_sec": 0, 00:18:00.163 "r_mbytes_per_sec": 0, 00:18:00.163 "w_mbytes_per_sec": 0 00:18:00.163 }, 00:18:00.163 "claimed": false, 00:18:00.163 "zoned": false, 00:18:00.163 "supported_io_types": { 00:18:00.163 "read": true, 00:18:00.163 "write": true, 00:18:00.163 
"unmap": true, 00:18:00.163 "flush": false, 00:18:00.163 "reset": true, 00:18:00.163 "nvme_admin": false, 00:18:00.163 "nvme_io": false, 00:18:00.163 "nvme_io_md": false, 00:18:00.163 "write_zeroes": true, 00:18:00.163 "zcopy": false, 00:18:00.163 "get_zone_info": false, 00:18:00.163 "zone_management": false, 00:18:00.163 "zone_append": false, 00:18:00.163 "compare": false, 00:18:00.163 "compare_and_write": false, 00:18:00.163 "abort": false, 00:18:00.163 "seek_hole": true, 00:18:00.163 "seek_data": true, 00:18:00.163 "copy": false, 00:18:00.163 "nvme_iov_md": false 00:18:00.163 }, 00:18:00.163 "driver_specific": { 00:18:00.163 "lvol": { 00:18:00.163 "lvol_store_uuid": "496dffea-bf54-4f50-9f93-4501063ee912", 00:18:00.163 "base_bdev": "nvme0n1", 00:18:00.163 "thin_provision": true, 00:18:00.163 "num_allocated_clusters": 0, 00:18:00.163 "snapshot": false, 00:18:00.163 "clone": false, 00:18:00.163 "esnap_clone": false 00:18:00.163 } 00:18:00.163 } 00:18:00.163 } 00:18:00.163 ]' 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:00.163 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:00.163 09:01:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:00.163 09:01:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:00.422 09:01:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:00.422 09:01:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d632abd9-9c0d-43a9-9862-6e8831f14497 00:18:00.422 09:01:54 ftl.ftl_restore 
-- common/autotest_common.sh@1378 -- # local bdev_name=d632abd9-9c0d-43a9-9862-6e8831f14497 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d632abd9-9c0d-43a9-9862-6e8831f14497 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.422 { 00:18:00.422 "name": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:18:00.422 "aliases": [ 00:18:00.422 "lvs/nvme0n1p0" 00:18:00.422 ], 00:18:00.422 "product_name": "Logical Volume", 00:18:00.422 "block_size": 4096, 00:18:00.422 "num_blocks": 26476544, 00:18:00.422 "uuid": "d632abd9-9c0d-43a9-9862-6e8831f14497", 00:18:00.422 "assigned_rate_limits": { 00:18:00.422 "rw_ios_per_sec": 0, 00:18:00.422 "rw_mbytes_per_sec": 0, 00:18:00.422 "r_mbytes_per_sec": 0, 00:18:00.422 "w_mbytes_per_sec": 0 00:18:00.422 }, 00:18:00.422 "claimed": false, 00:18:00.422 "zoned": false, 00:18:00.422 "supported_io_types": { 00:18:00.422 "read": true, 00:18:00.422 "write": true, 00:18:00.422 "unmap": true, 00:18:00.422 "flush": false, 00:18:00.422 "reset": true, 00:18:00.422 "nvme_admin": false, 00:18:00.422 "nvme_io": false, 00:18:00.422 "nvme_io_md": false, 00:18:00.422 "write_zeroes": true, 00:18:00.422 "zcopy": false, 00:18:00.422 "get_zone_info": false, 00:18:00.422 "zone_management": false, 00:18:00.422 "zone_append": false, 00:18:00.422 "compare": false, 00:18:00.422 "compare_and_write": false, 00:18:00.422 "abort": false, 00:18:00.422 "seek_hole": true, 00:18:00.422 "seek_data": true, 00:18:00.422 "copy": false, 00:18:00.422 "nvme_iov_md": false 00:18:00.422 }, 00:18:00.422 "driver_specific": { 00:18:00.422 "lvol": { 00:18:00.422 "lvol_store_uuid": 
"496dffea-bf54-4f50-9f93-4501063ee912", 00:18:00.422 "base_bdev": "nvme0n1", 00:18:00.422 "thin_provision": true, 00:18:00.422 "num_allocated_clusters": 0, 00:18:00.422 "snapshot": false, 00:18:00.422 "clone": false, 00:18:00.422 "esnap_clone": false 00:18:00.422 } 00:18:00.422 } 00:18:00.422 } 00:18:00.422 ]' 00:18:00.422 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.423 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.683 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.683 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:00.683 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:00.683 09:01:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d632abd9-9c0d-43a9-9862-6e8831f14497 --l2p_dram_limit 10' 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:00.683 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:00.683 09:01:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d632abd9-9c0d-43a9-9862-6e8831f14497 --l2p_dram_limit 10 -c nvc0n1p0 00:18:00.683 [2024-11-28 09:01:54.747535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.747577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check 
configuration 00:18:00.683 [2024-11-28 09:01:54.747589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:00.683 [2024-11-28 09:01:54.747598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.747634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.747644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.683 [2024-11-28 09:01:54.747650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:00.683 [2024-11-28 09:01:54.747662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.747683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.683 [2024-11-28 09:01:54.747874] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.683 [2024-11-28 09:01:54.747887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.747896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.683 [2024-11-28 09:01:54.747905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:18:00.683 [2024-11-28 09:01:54.747916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.747939] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:18:00.683 [2024-11-28 09:01:54.749265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.749288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:00.683 [2024-11-28 09:01:54.749297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:00.683 [2024-11-28 09:01:54.749304] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.756112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.756135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.683 [2024-11-28 09:01:54.756144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.744 ms 00:18:00.683 [2024-11-28 09:01:54.756154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.756218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.756225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.683 [2024-11-28 09:01:54.756234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:00.683 [2024-11-28 09:01:54.756242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.756279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.756287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.683 [2024-11-28 09:01:54.756296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:00.683 [2024-11-28 09:01:54.756302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.756319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.683 [2024-11-28 09:01:54.757952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.757978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.683 [2024-11-28 09:01:54.757987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:18:00.683 [2024-11-28 09:01:54.757995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.758021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.758030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.683 [2024-11-28 09:01:54.758036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:00.683 [2024-11-28 09:01:54.758046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.758062] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:00.683 [2024-11-28 09:01:54.758191] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.683 [2024-11-28 09:01:54.758202] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.683 [2024-11-28 09:01:54.758216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.683 [2024-11-28 09:01:54.758224] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.683 [2024-11-28 09:01:54.758233] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.683 [2024-11-28 09:01:54.758239] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.683 [2024-11-28 09:01:54.758248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.683 [2024-11-28 09:01:54.758254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.683 [2024-11-28 09:01:54.758261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.683 [2024-11-28 09:01:54.758268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.758276] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.683 [2024-11-28 09:01:54.758282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:00.683 [2024-11-28 09:01:54.758291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.758356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.683 [2024-11-28 09:01:54.758375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.683 [2024-11-28 09:01:54.758386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:00.683 [2024-11-28 09:01:54.758394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.683 [2024-11-28 09:01:54.758467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.683 [2024-11-28 09:01:54.758484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.683 [2024-11-28 09:01:54.758490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.683 [2024-11-28 09:01:54.758498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.683 [2024-11-28 09:01:54.758504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.683 [2024-11-28 09:01:54.758511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.683 [2024-11-28 09:01:54.758517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.683 [2024-11-28 09:01:54.758525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.683 [2024-11-28 09:01:54.758531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.683 [2024-11-28 09:01:54.758538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.683 [2024-11-28 09:01:54.758543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.684 [2024-11-28 09:01:54.758549] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.684 [2024-11-28 09:01:54.758555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.684 [2024-11-28 09:01:54.758564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.684 [2024-11-28 09:01:54.758570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.684 [2024-11-28 09:01:54.758578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.684 [2024-11-28 09:01:54.758590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.684 [2024-11-28 09:01:54.758609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.684 [2024-11-28 09:01:54.758631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.684 [2024-11-28 09:01:54.758650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.684 [2024-11-28 
09:01:54.758677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.684 [2024-11-28 09:01:54.758697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.684 [2024-11-28 09:01:54.758712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.684 [2024-11-28 09:01:54.758719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.684 [2024-11-28 09:01:54.758725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.684 [2024-11-28 09:01:54.758733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.684 [2024-11-28 09:01:54.758738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.684 [2024-11-28 09:01:54.758746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.684 [2024-11-28 09:01:54.758759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.684 [2024-11-28 09:01:54.758766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758773] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.684 [2024-11-28 09:01:54.758781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.684 [2024-11-28 09:01:54.758791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:18:00.684 [2024-11-28 09:01:54.758818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.684 [2024-11-28 09:01:54.758824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.684 [2024-11-28 09:01:54.758832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.684 [2024-11-28 09:01:54.758838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.684 [2024-11-28 09:01:54.758846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.684 [2024-11-28 09:01:54.758853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.684 [2024-11-28 09:01:54.758865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.684 [2024-11-28 09:01:54.758876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.758886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.684 [2024-11-28 09:01:54.758893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.684 [2024-11-28 09:01:54.758901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.684 [2024-11-28 09:01:54.758908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.684 [2024-11-28 09:01:54.758916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.684 [2024-11-28 09:01:54.758923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.684 [2024-11-28 09:01:54.758934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.684 [2024-11-28 09:01:54.758940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.684 [2024-11-28 09:01:54.758950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.684 [2024-11-28 09:01:54.758956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.758965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.758972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.758980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.758985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.684 [2024-11-28 09:01:54.758992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.684 [2024-11-28 09:01:54.759000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.684 [2024-11-28 09:01:54.759008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.684 [2024-11-28 
09:01:54.759013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.684 [2024-11-28 09:01:54.759021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.684 [2024-11-28 09:01:54.759026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.684 [2024-11-28 09:01:54.759035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.684 [2024-11-28 09:01:54.759043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.684 [2024-11-28 09:01:54.759053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:18:00.684 [2024-11-28 09:01:54.759058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.684 [2024-11-28 09:01:54.759089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:00.684 [2024-11-28 09:01:54.759100] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:04.943 [2024-11-28 09:01:58.385219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.385294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:04.943 [2024-11-28 09:01:58.385312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3626.083 ms 00:18:04.943 [2024-11-28 09:01:58.385320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.395968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.396007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.943 [2024-11-28 09:01:58.396021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.561 ms 00:18:04.943 [2024-11-28 09:01:58.396028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.396121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.396133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:04.943 [2024-11-28 09:01:58.396145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:04.943 [2024-11-28 09:01:58.396151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.405305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.405337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.943 [2024-11-28 09:01:58.405347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.119 ms 00:18:04.943 [2024-11-28 09:01:58.405354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.405380] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.405390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.943 [2024-11-28 09:01:58.405398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:04.943 [2024-11-28 09:01:58.405404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.405810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.405912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.943 [2024-11-28 09:01:58.405922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:04.943 [2024-11-28 09:01:58.405930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.406020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.406034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.943 [2024-11-28 09:01:58.406046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:04.943 [2024-11-28 09:01:58.406079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.419572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.419616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.943 [2024-11-28 09:01:58.419633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.471 ms 00:18:04.943 [2024-11-28 09:01:58.419644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.428504] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:04.943 [2024-11-28 09:01:58.431452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 
[2024-11-28 09:01:58.431481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:04.943 [2024-11-28 09:01:58.431490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.677 ms 00:18:04.943 [2024-11-28 09:01:58.431498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.501185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.501221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:04.943 [2024-11-28 09:01:58.501231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.662 ms 00:18:04.943 [2024-11-28 09:01:58.501242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.501399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.501411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:04.943 [2024-11-28 09:01:58.501418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:04.943 [2024-11-28 09:01:58.501427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.504971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.505001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:04.943 [2024-11-28 09:01:58.505010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms 00:18:04.943 [2024-11-28 09:01:58.505019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.507998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.508027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:04.943 [2024-11-28 09:01:58.508035] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:18:04.943 [2024-11-28 09:01:58.508043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.508291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.508307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:04.943 [2024-11-28 09:01:58.508314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:18:04.943 [2024-11-28 09:01:58.508325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.540462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.540494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:04.943 [2024-11-28 09:01:58.540503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.113 ms 00:18:04.943 [2024-11-28 09:01:58.540511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.545196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.545226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:04.943 [2024-11-28 09:01:58.545234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:18:04.943 [2024-11-28 09:01:58.545243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.548683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.548712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:04.943 [2024-11-28 09:01:58.548720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.412 ms 00:18:04.943 [2024-11-28 09:01:58.548728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 
09:01:58.552606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.552637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:04.943 [2024-11-28 09:01:58.552645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.852 ms 00:18:04.943 [2024-11-28 09:01:58.552655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.552685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.552695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:04.943 [2024-11-28 09:01:58.552702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:04.943 [2024-11-28 09:01:58.552710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.552776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.943 [2024-11-28 09:01:58.552785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:04.943 [2024-11-28 09:01:58.552792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:04.943 [2024-11-28 09:01:58.552817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.943 [2024-11-28 09:01:58.553668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3805.728 ms, result 0 00:18:04.943 { 00:18:04.943 "name": "ftl0", 00:18:04.943 "uuid": "cbd333f7-9e85-49f2-b58d-d1b0fe0ae743" 00:18:04.943 } 00:18:04.943 09:01:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:04.943 09:01:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:04.943 09:01:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:04.943 09:01:58 ftl.ftl_restore -- ftl/restore.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:04.943 [2024-11-28 09:01:58.949483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.949514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.944 [2024-11-28 09:01:58.949524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:04.944 [2024-11-28 09:01:58.949531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.949552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.944 [2024-11-28 09:01:58.950109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.950128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.944 [2024-11-28 09:01:58.950135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:18:04.944 [2024-11-28 09:01:58.950145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.950339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.950350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.944 [2024-11-28 09:01:58.950358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:18:04.944 [2024-11-28 09:01:58.950367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.952778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.952809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.944 [2024-11-28 09:01:58.952817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.397 ms 00:18:04.944 [2024-11-28 09:01:58.952826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.957426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.957451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:04.944 [2024-11-28 09:01:58.957459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:18:04.944 [2024-11-28 09:01:58.957468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.959258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.959288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.944 [2024-11-28 09:01:58.959296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:18:04.944 [2024-11-28 09:01:58.959303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.964180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.964210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.944 [2024-11-28 09:01:58.964219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.828 ms 00:18:04.944 [2024-11-28 09:01:58.964227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.964319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.964335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.944 [2024-11-28 09:01:58.964343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:04.944 [2024-11-28 09:01:58.964352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.966400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.966429] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:04.944 [2024-11-28 09:01:58.966436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:18:04.944 [2024-11-28 09:01:58.966444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.968300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.968330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:04.944 [2024-11-28 09:01:58.968338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:18:04.944 [2024-11-28 09:01:58.968346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.970095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.970123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.944 [2024-11-28 09:01:58.970130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:18:04.944 [2024-11-28 09:01:58.970137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.971575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.944 [2024-11-28 09:01:58.971603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.944 [2024-11-28 09:01:58.971610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:18:04.944 [2024-11-28 09:01:58.971618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.944 [2024-11-28 09:01:58.971670] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.944 [2024-11-28 09:01:58.971688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971921] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.971995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972059] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:04.944 [2024-11-28 09:01:58.972099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972155] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 
09:01:58.972259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 
[2024-11-28 09:01:58.972355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:18:04.945 [2024-11-28 09:01:58.972450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:04.945 [2024-11-28 09:01:58.972465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.945 [2024-11-28 09:01:58.972471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:18:04.945 [2024-11-28 09:01:58.972479] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.945 [2024-11-28 09:01:58.972485] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.945 [2024-11-28 09:01:58.972492] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.945 [2024-11-28 09:01:58.972498] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.945 [2024-11-28 09:01:58.972507] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.945 [2024-11-28 09:01:58.972513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.945 [2024-11-28 09:01:58.972521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.945 [2024-11-28 09:01:58.972526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.945 [2024-11-28 09:01:58.972532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.945 [2024-11-28 09:01:58.972538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.945 [2024-11-28 09:01:58.972550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.945 [2024-11-28 09:01:58.972557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:18:04.945 [2024-11-28 09:01:58.972564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.974109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.945 [2024-11-28 09:01:58.974133] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.945 [2024-11-28 09:01:58.974141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:18:04.945 [2024-11-28 09:01:58.974149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.974210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.945 [2024-11-28 09:01:58.974224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.945 [2024-11-28 09:01:58.974231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:04.945 [2024-11-28 09:01:58.974238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.980252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:58.980284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.945 [2024-11-28 09:01:58.980292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:58.980300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.980347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:58.980355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.945 [2024-11-28 09:01:58.980361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:58.980369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.980410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:58.980427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.945 [2024-11-28 09:01:58.980437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:58.980445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.980459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:58.980473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.945 [2024-11-28 09:01:58.980480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:58.980488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:58.991245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:58.991282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.945 [2024-11-28 09:01:58.991291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:58.991299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:59.000311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:59.000348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.945 [2024-11-28 09:01:59.000356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.945 [2024-11-28 09:01:59.000367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.945 [2024-11-28 09:01:59.000434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.945 [2024-11-28 09:01:59.000446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.946 [2024-11-28 09:01:59.000453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000493] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.946 [2024-11-28 09:01:59.000503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.946 [2024-11-28 09:01:59.000512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.946 [2024-11-28 09:01:59.000592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.946 [2024-11-28 09:01:59.000600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.946 [2024-11-28 09:01:59.000645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.946 [2024-11-28 09:01:59.000656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.946 [2024-11-28 09:01:59.000718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.946 [2024-11-28 09:01:59.000725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.946 [2024-11-28 09:01:59.000784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:18:04.946 [2024-11-28 09:01:59.000794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.946 [2024-11-28 09:01:59.000815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.946 [2024-11-28 09:01:59.000943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.417 ms, result 0 00:18:04.946 true 00:18:04.946 09:01:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86579 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86579 ']' 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86579 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86579 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.946 killing process with pid 86579 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86579' 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86579 00:18:04.946 09:01:59 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86579 00:18:10.234 09:02:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:14.441 262144+0 records in 00:18:14.441 262144+0 records out 00:18:14.441 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.08384 s, 263 MB/s 00:18:14.441 09:02:08 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:16.355 09:02:10 ftl.ftl_restore -- ftl/restore.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:16.355 [2024-11-28 09:02:10.115333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:16.355 [2024-11-28 09:02:10.115425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86799 ] 00:18:16.355 [2024-11-28 09:02:10.258237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.355 [2024-11-28 09:02:10.305036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.355 [2024-11-28 09:02:10.433235] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.355 [2024-11-28 09:02:10.433327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.619 [2024-11-28 09:02:10.597228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.619 [2024-11-28 09:02:10.597289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.619 [2024-11-28 09:02:10.597310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:16.619 [2024-11-28 09:02:10.597320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.597381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.597422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.620 [2024-11-28 09:02:10.597431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:16.620 [2024-11-28 09:02:10.597444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 
09:02:10.597472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.620 [2024-11-28 09:02:10.597795] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.620 [2024-11-28 09:02:10.597851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.597860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.620 [2024-11-28 09:02:10.597874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:18:16.620 [2024-11-28 09:02:10.597893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.600189] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:16.620 [2024-11-28 09:02:10.604871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.604919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:16.620 [2024-11-28 09:02:10.604932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:18:16.620 [2024-11-28 09:02:10.604942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.605032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.605044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:16.620 [2024-11-28 09:02:10.605066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:16.620 [2024-11-28 09:02:10.605074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.616791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.616859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.620 
[2024-11-28 09:02:10.616872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.666 ms 00:18:16.620 [2024-11-28 09:02:10.616886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.616997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.617008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.620 [2024-11-28 09:02:10.617017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:16.620 [2024-11-28 09:02:10.617026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.617095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.617108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.620 [2024-11-28 09:02:10.617118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:16.620 [2024-11-28 09:02:10.617127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.617156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.620 [2024-11-28 09:02:10.619885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.619927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.620 [2024-11-28 09:02:10.619938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:18:16.620 [2024-11-28 09:02:10.619946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.619995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.620006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.620 [2024-11-28 09:02:10.620015] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:16.620 [2024-11-28 09:02:10.620023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.620049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:16.620 [2024-11-28 09:02:10.620086] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:16.620 [2024-11-28 09:02:10.620129] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:16.620 [2024-11-28 09:02:10.620149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:16.620 [2024-11-28 09:02:10.620267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.620 [2024-11-28 09:02:10.620280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.620 [2024-11-28 09:02:10.620293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:16.620 [2024-11-28 09:02:10.620309] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620323] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:16.620 [2024-11-28 09:02:10.620342] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.620 [2024-11-28 09:02:10.620351] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.620 [2024-11-28 09:02:10.620365] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:16.620 
[2024-11-28 09:02:10.620374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.620385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.620 [2024-11-28 09:02:10.620399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:16.620 [2024-11-28 09:02:10.620407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.620496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.620 [2024-11-28 09:02:10.620521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.620 [2024-11-28 09:02:10.620530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:16.620 [2024-11-28 09:02:10.620540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.620 [2024-11-28 09:02:10.620644] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.620 [2024-11-28 09:02:10.620658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.620 [2024-11-28 09:02:10.620669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.620 [2024-11-28 09:02:10.620703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.620 [2024-11-28 09:02:10.620729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:18:16.620 [2024-11-28 09:02:10.620743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.620 [2024-11-28 09:02:10.620750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:16.620 [2024-11-28 09:02:10.620760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.620 [2024-11-28 09:02:10.620769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.620 [2024-11-28 09:02:10.620780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:16.620 [2024-11-28 09:02:10.620789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:16.620 [2024-11-28 09:02:10.620823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.620 [2024-11-28 09:02:10.620848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.620 [2024-11-28 09:02:10.620872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.620 [2024-11-28 09:02:10.620895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.620 [2024-11-28 09:02:10.620928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.620 [2024-11-28 09:02:10.620943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.620 [2024-11-28 09:02:10.620949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:16.620 [2024-11-28 09:02:10.620956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.620 [2024-11-28 09:02:10.620963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.620 [2024-11-28 09:02:10.620969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:16.620 [2024-11-28 09:02:10.620977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.620 [2024-11-28 09:02:10.620985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.620 [2024-11-28 09:02:10.620992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:16.620 [2024-11-28 09:02:10.620998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.620 [2024-11-28 09:02:10.621005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.620 [2024-11-28 09:02:10.621012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:16.620 [2024-11-28 09:02:10.621019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.621 [2024-11-28 09:02:10.621026] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.621 [2024-11-28 09:02:10.621038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.621 [2024-11-28 09:02:10.621048] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.621 [2024-11-28 09:02:10.621061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.621 [2024-11-28 09:02:10.621072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:16.621 [2024-11-28 09:02:10.621079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.621 [2024-11-28 09:02:10.621086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.621 [2024-11-28 09:02:10.621095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.621 [2024-11-28 09:02:10.621102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.621 [2024-11-28 09:02:10.621109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.621 [2024-11-28 09:02:10.621118] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.621 [2024-11-28 09:02:10.621127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:16.621 [2024-11-28 09:02:10.621145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:16.621 [2024-11-28 09:02:10.621154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:16.621 [2024-11-28 09:02:10.621161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:16.621 [2024-11-28 09:02:10.621169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:16.621 [2024-11-28 09:02:10.621180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:16.621 [2024-11-28 09:02:10.621188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:16.621 [2024-11-28 09:02:10.621198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:16.621 [2024-11-28 09:02:10.621207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:16.621 [2024-11-28 09:02:10.621214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:16.621 [2024-11-28 09:02:10.621260] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.621 [2024-11-28 09:02:10.621269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621278] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.621 [2024-11-28 09:02:10.621287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.621 [2024-11-28 09:02:10.621295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.621 [2024-11-28 09:02:10.621304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.621 [2024-11-28 09:02:10.621313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.621324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.621 [2024-11-28 09:02:10.621336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:18:16.621 [2024-11-28 09:02:10.621345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.651483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.651561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.621 [2024-11-28 09:02:10.651596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.032 ms 00:18:16.621 [2024-11-28 09:02:10.651613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.651824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.651846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.621 [2024-11-28 09:02:10.651865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:16.621 [2024-11-28 09:02:10.651879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:16.621 [2024-11-28 09:02:10.668432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.668482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.621 [2024-11-28 09:02:10.668494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.394 ms 00:18:16.621 [2024-11-28 09:02:10.668510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.668551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.668561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.621 [2024-11-28 09:02:10.668571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:16.621 [2024-11-28 09:02:10.668583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.669329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.669379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.621 [2024-11-28 09:02:10.669421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:18:16.621 [2024-11-28 09:02:10.669431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.669600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.669612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.621 [2024-11-28 09:02:10.669622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:18:16.621 [2024-11-28 09:02:10.669631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.679312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.679357] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.621 [2024-11-28 09:02:10.679377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.653 ms 00:18:16.621 [2024-11-28 09:02:10.679393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.684258] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:16.621 [2024-11-28 09:02:10.684316] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:16.621 [2024-11-28 09:02:10.684331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.684340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:16.621 [2024-11-28 09:02:10.684350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:18:16.621 [2024-11-28 09:02:10.684358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.700674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.700731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:16.621 [2024-11-28 09:02:10.700745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:18:16.621 [2024-11-28 09:02:10.700758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.703713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.703759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:16.621 [2024-11-28 09:02:10.703770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:18:16.621 [2024-11-28 09:02:10.703779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.706419] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.706471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:16.621 [2024-11-28 09:02:10.706482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:18:16.621 [2024-11-28 09:02:10.706490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.621 [2024-11-28 09:02:10.706898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.621 [2024-11-28 09:02:10.706924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.621 [2024-11-28 09:02:10.706937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:18:16.621 [2024-11-28 09:02:10.706947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.737718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.737780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:16.883 [2024-11-28 09:02:10.737811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.750 ms 00:18:16.883 [2024-11-28 09:02:10.737822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.746274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:16.883 [2024-11-28 09:02:10.749596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.749639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.883 [2024-11-28 09:02:10.749658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.722 ms 00:18:16.883 [2024-11-28 09:02:10.749675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.749750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.749763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:16.883 [2024-11-28 09:02:10.749775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:16.883 [2024-11-28 09:02:10.749790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.749889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.749903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.883 [2024-11-28 09:02:10.749912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:16.883 [2024-11-28 09:02:10.749921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.749953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.749962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.883 [2024-11-28 09:02:10.749976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:16.883 [2024-11-28 09:02:10.749986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.750028] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:16.883 [2024-11-28 09:02:10.750040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.750050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:16.883 [2024-11-28 09:02:10.750059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:16.883 [2024-11-28 09:02:10.750068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.756468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.756526] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.883 [2024-11-28 09:02:10.756539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.376 ms 00:18:16.883 [2024-11-28 09:02:10.756548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.756648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.883 [2024-11-28 09:02:10.756660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.883 [2024-11-28 09:02:10.756670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:16.883 [2024-11-28 09:02:10.756683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.883 [2024-11-28 09:02:10.758987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.077 ms, result 0 00:18:17.825  [2024-11-28T09:02:12.877Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-28T09:02:13.818Z] Copying: 51/1024 [MB] (33 MBps) [2024-11-28T09:02:15.196Z] Copying: 66/1024 [MB] (15 MBps) [2024-11-28T09:02:16.139Z] Copying: 96/1024 [MB] (29 MBps) [2024-11-28T09:02:17.085Z] Copying: 112/1024 [MB] (15 MBps) [2024-11-28T09:02:18.023Z] Copying: 124/1024 [MB] (11 MBps) [2024-11-28T09:02:18.957Z] Copying: 139/1024 [MB] (15 MBps) [2024-11-28T09:02:19.897Z] Copying: 158/1024 [MB] (19 MBps) [2024-11-28T09:02:20.841Z] Copying: 179/1024 [MB] (20 MBps) [2024-11-28T09:02:21.785Z] Copying: 190/1024 [MB] (11 MBps) [2024-11-28T09:02:23.172Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-28T09:02:24.107Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-28T09:02:25.040Z] Copying: 225/1024 [MB] (15 MBps) [2024-11-28T09:02:25.978Z] Copying: 244/1024 [MB] (18 MBps) [2024-11-28T09:02:26.923Z] Copying: 263/1024 [MB] (19 MBps) [2024-11-28T09:02:27.867Z] Copying: 281/1024 [MB] (17 MBps) [2024-11-28T09:02:28.813Z] Copying: 295/1024 [MB] (14 MBps) [2024-11-28T09:02:29.770Z] 
Copying: 306/1024 [MB] (11 MBps) [2024-11-28T09:02:31.150Z] Copying: 324/1024 [MB] (18 MBps) [2024-11-28T09:02:32.093Z] Copying: 338/1024 [MB] (14 MBps) [2024-11-28T09:02:33.061Z] Copying: 363/1024 [MB] (24 MBps) [2024-11-28T09:02:34.044Z] Copying: 378/1024 [MB] (14 MBps) [2024-11-28T09:02:34.979Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-28T09:02:35.913Z] Copying: 402/1024 [MB] (14 MBps) [2024-11-28T09:02:36.850Z] Copying: 425/1024 [MB] (22 MBps) [2024-11-28T09:02:37.793Z] Copying: 451/1024 [MB] (26 MBps) [2024-11-28T09:02:39.172Z] Copying: 467/1024 [MB] (16 MBps) [2024-11-28T09:02:40.105Z] Copying: 481/1024 [MB] (13 MBps) [2024-11-28T09:02:41.046Z] Copying: 510/1024 [MB] (29 MBps) [2024-11-28T09:02:41.984Z] Copying: 538/1024 [MB] (27 MBps) [2024-11-28T09:02:42.917Z] Copying: 557/1024 [MB] (18 MBps) [2024-11-28T09:02:43.854Z] Copying: 592/1024 [MB] (35 MBps) [2024-11-28T09:02:44.796Z] Copying: 610/1024 [MB] (18 MBps) [2024-11-28T09:02:46.179Z] Copying: 622/1024 [MB] (12 MBps) [2024-11-28T09:02:47.126Z] Copying: 638/1024 [MB] (15 MBps) [2024-11-28T09:02:48.071Z] Copying: 656/1024 [MB] (18 MBps) [2024-11-28T09:02:49.015Z] Copying: 672/1024 [MB] (16 MBps) [2024-11-28T09:02:49.959Z] Copying: 685/1024 [MB] (12 MBps) [2024-11-28T09:02:50.902Z] Copying: 696/1024 [MB] (11 MBps) [2024-11-28T09:02:51.842Z] Copying: 709/1024 [MB] (13 MBps) [2024-11-28T09:02:52.788Z] Copying: 725/1024 [MB] (15 MBps) [2024-11-28T09:02:54.176Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-28T09:02:55.119Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-28T09:02:56.064Z] Copying: 761/1024 [MB] (15 MBps) [2024-11-28T09:02:57.009Z] Copying: 774/1024 [MB] (12 MBps) [2024-11-28T09:02:57.954Z] Copying: 787/1024 [MB] (12 MBps) [2024-11-28T09:02:58.897Z] Copying: 805/1024 [MB] (18 MBps) [2024-11-28T09:02:59.842Z] Copying: 825/1024 [MB] (20 MBps) [2024-11-28T09:03:00.786Z] Copying: 838/1024 [MB] (13 MBps) [2024-11-28T09:03:02.174Z] Copying: 849/1024 [MB] (10 MBps) [2024-11-28T09:03:03.118Z] Copying: 863/1024 
[MB] (14 MBps) [2024-11-28T09:03:04.063Z] Copying: 875/1024 [MB] (12 MBps) [2024-11-28T09:03:05.036Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-28T09:03:06.018Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-28T09:03:06.963Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-28T09:03:07.907Z] Copying: 917/1024 [MB] (10 MBps) [2024-11-28T09:03:08.852Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-28T09:03:09.799Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-28T09:03:11.185Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-28T09:03:12.125Z] Copying: 971/1024 [MB] (21 MBps) [2024-11-28T09:03:13.079Z] Copying: 1006/1024 [MB] (34 MBps) [2024-11-28T09:03:13.655Z] Copying: 1017/1024 [MB] (10 MBps) [2024-11-28T09:03:13.655Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-28 09:03:13.433786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.433871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:19.535 [2024-11-28 09:03:13.433890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:19.535 [2024-11-28 09:03:13.433899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.433923] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.535 [2024-11-28 09:03:13.434897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.434936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:19.535 [2024-11-28 09:03:13.434948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:19:19.535 [2024-11-28 09:03:13.434956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.437896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.437942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:19:19.535 [2024-11-28 09:03:13.437963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:19:19.535 [2024-11-28 09:03:13.437972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.456458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.456516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:19.535 [2024-11-28 09:03:13.456530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.468 ms 00:19:19.535 [2024-11-28 09:03:13.456539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.462731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.462773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:19.535 [2024-11-28 09:03:13.462784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:19:19.535 [2024-11-28 09:03:13.462792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.465878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.465926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:19.535 [2024-11-28 09:03:13.465936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:19:19.535 [2024-11-28 09:03:13.465945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.471827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.471895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:19.535 [2024-11-28 09:03:13.471906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.840 ms 00:19:19.535 [2024-11-28 09:03:13.471915] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.472045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.472063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:19.535 [2024-11-28 09:03:13.472074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:19.535 [2024-11-28 09:03:13.472082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.475348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.475396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:19.535 [2024-11-28 09:03:13.475407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:19:19.535 [2024-11-28 09:03:13.475416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.478467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.478528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:19.535 [2024-11-28 09:03:13.478537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:19:19.535 [2024-11-28 09:03:13.478544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.481068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 09:03:13.481115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:19.535 [2024-11-28 09:03:13.481125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:19:19.535 [2024-11-28 09:03:13.481132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.483412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.535 [2024-11-28 
09:03:13.483460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:19.535 [2024-11-28 09:03:13.483469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.211 ms 00:19:19.535 [2024-11-28 09:03:13.483476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.535 [2024-11-28 09:03:13.483512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:19.535 [2024-11-28 09:03:13.483529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: 
free 00:19:19.536 [2024-11-28 09:03:13.483629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:19:19.536 [2024-11-28 09:03:13.483736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 
0 state: free 00:19:19.536 [2024-11-28 09:03:13.483863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:19.536 [2024-11-28 09:03:13.483971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.483979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.483987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.483995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 
261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 
/ 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:19.537 [2024-11-28 09:03:13.484381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:19.537 [2024-11-28 09:03:13.484391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:19:19.538 [2024-11-28 09:03:13.484399] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:19.538 [2024-11-28 09:03:13.484407] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:19.538 [2024-11-28 09:03:13.484414] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:19.538 [2024-11-28 09:03:13.484422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:19.538 [2024-11-28 09:03:13.484430] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:19.538 [2024-11-28 09:03:13.484439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:19.538 [2024-11-28 09:03:13.484447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:19.538 [2024-11-28 09:03:13.484453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:19.538 [2024-11-28 09:03:13.484460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:19:19.538 [2024-11-28 09:03:13.484467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.538 [2024-11-28 09:03:13.484477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:19.538 [2024-11-28 09:03:13.484487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:19:19.538 [2024-11-28 09:03:13.484502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.487605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.538 [2024-11-28 09:03:13.487659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:19.538 [2024-11-28 09:03:13.487672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:19:19.538 [2024-11-28 09:03:13.487682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.487855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.538 [2024-11-28 09:03:13.487868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:19.538 [2024-11-28 09:03:13.487884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:19.538 [2024-11-28 09:03:13.487893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.496991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.497040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.538 [2024-11-28 09:03:13.497052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.497060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.497127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.497136] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.538 [2024-11-28 09:03:13.497155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.497164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.497209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.497220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.538 [2024-11-28 09:03:13.497229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.497237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.497253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.497261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.538 [2024-11-28 09:03:13.497269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.497280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.516779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.516860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.538 [2024-11-28 09:03:13.516873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.516883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.532321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.532381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.538 [2024-11-28 09:03:13.532404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:19.538 [2024-11-28 09:03:13.532413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.532551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.532565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.538 [2024-11-28 09:03:13.532575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.532583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.532623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.532636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.538 [2024-11-28 09:03:13.532645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.532653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.532741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.532759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.538 [2024-11-28 09:03:13.532768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.532778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.538 [2024-11-28 09:03:13.532833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.538 [2024-11-28 09:03:13.532847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:19.538 [2024-11-28 09:03:13.532858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.538 [2024-11-28 09:03:13.532867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.539 [2024-11-28 09:03:13.532923] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:19.539 [2024-11-28 09:03:13.532946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.539 [2024-11-28 09:03:13.532956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.539 [2024-11-28 09:03:13.532966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.539 [2024-11-28 09:03:13.533027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:19.539 [2024-11-28 09:03:13.533056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.539 [2024-11-28 09:03:13.533065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:19.539 [2024-11-28 09:03:13.533075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.539 [2024-11-28 09:03:13.533241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 99.412 ms, result 0 00:19:20.112 00:19:20.112 00:19:20.112 09:03:14 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:20.112 [2024-11-28 09:03:14.091103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:20.112 [2024-11-28 09:03:14.091253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87459 ] 00:19:20.373 [2024-11-28 09:03:14.241153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.373 [2024-11-28 09:03:14.312115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.373 [2024-11-28 09:03:14.459115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.373 [2024-11-28 09:03:14.459210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.636 [2024-11-28 09:03:14.623123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.623183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.636 [2024-11-28 09:03:14.623204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.636 [2024-11-28 09:03:14.623213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.623277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.623290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.636 [2024-11-28 09:03:14.623304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:20.636 [2024-11-28 09:03:14.623316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.623348] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.636 [2024-11-28 09:03:14.623630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.636 [2024-11-28 
09:03:14.623670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.623680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.636 [2024-11-28 09:03:14.623693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:19:20.636 [2024-11-28 09:03:14.623705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.626102] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.636 [2024-11-28 09:03:14.630649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.630700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.636 [2024-11-28 09:03:14.630712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:19:20.636 [2024-11-28 09:03:14.630720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.630820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.630831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.636 [2024-11-28 09:03:14.630844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:20.636 [2024-11-28 09:03:14.630854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.642198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.642239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.636 [2024-11-28 09:03:14.642251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.297 ms 00:19:20.636 [2024-11-28 09:03:14.642273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.642375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.642386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.636 [2024-11-28 09:03:14.642395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:20.636 [2024-11-28 09:03:14.642404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.642470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.642482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.636 [2024-11-28 09:03:14.642496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:20.636 [2024-11-28 09:03:14.642504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.642539] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.636 [2024-11-28 09:03:14.645187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.645228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.636 [2024-11-28 09:03:14.645238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:19:20.636 [2024-11-28 09:03:14.645247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.645284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.645293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.636 [2024-11-28 09:03:14.645302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:20.636 [2024-11-28 09:03:14.645312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.645335] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.636 
[2024-11-28 09:03:14.645368] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.636 [2024-11-28 09:03:14.645409] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.636 [2024-11-28 09:03:14.645427] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.636 [2024-11-28 09:03:14.645554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.636 [2024-11-28 09:03:14.645565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.636 [2024-11-28 09:03:14.645577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.636 [2024-11-28 09:03:14.645590] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.636 [2024-11-28 09:03:14.645604] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.636 [2024-11-28 09:03:14.645613] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:20.636 [2024-11-28 09:03:14.645621] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.636 [2024-11-28 09:03:14.645628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.636 [2024-11-28 09:03:14.645645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.636 [2024-11-28 09:03:14.645654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.645662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.636 [2024-11-28 09:03:14.645671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.322 ms 00:19:20.636 [2024-11-28 09:03:14.645679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.645766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.636 [2024-11-28 09:03:14.645787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.636 [2024-11-28 09:03:14.645815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:20.636 [2024-11-28 09:03:14.645823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.636 [2024-11-28 09:03:14.645933] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.637 [2024-11-28 09:03:14.645947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.637 [2024-11-28 09:03:14.645958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.637 [2024-11-28 09:03:14.645975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.645985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.637 [2024-11-28 09:03:14.645993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.637 [2024-11-28 09:03:14.646022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.637 [2024-11-28 09:03:14.646039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.637 [2024-11-28 09:03:14.646047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:20.637 [2024-11-28 09:03:14.646062] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.637 [2024-11-28 09:03:14.646073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.637 [2024-11-28 09:03:14.646082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:20.637 [2024-11-28 09:03:14.646092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.637 [2024-11-28 09:03:14.646109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.637 [2024-11-28 09:03:14.646135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.637 [2024-11-28 09:03:14.646159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.637 [2024-11-28 09:03:14.646186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.637 [2024-11-28 09:03:14.646217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646226] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.637 [2024-11-28 09:03:14.646242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.637 [2024-11-28 09:03:14.646257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.637 [2024-11-28 09:03:14.646266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:20.637 [2024-11-28 09:03:14.646274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.637 [2024-11-28 09:03:14.646284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.637 [2024-11-28 09:03:14.646292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:20.637 [2024-11-28 09:03:14.646300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.637 [2024-11-28 09:03:14.646316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:20.637 [2024-11-28 09:03:14.646325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.637 [2024-11-28 09:03:14.646352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.637 [2024-11-28 09:03:14.646364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.637 [2024-11-28 09:03:14.646390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.637 
[2024-11-28 09:03:14.646398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.637 [2024-11-28 09:03:14.646405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.637 [2024-11-28 09:03:14.646413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.637 [2024-11-28 09:03:14.646420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.637 [2024-11-28 09:03:14.646427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.637 [2024-11-28 09:03:14.646437] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.637 [2024-11-28 09:03:14.646446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.637 [2024-11-28 09:03:14.646455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:20.637 [2024-11-28 09:03:14.646463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:20.637 [2024-11-28 09:03:14.646471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:20.637 [2024-11-28 09:03:14.646478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:20.637 [2024-11-28 09:03:14.646488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:20.637 [2024-11-28 09:03:14.646499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:20.637 [2024-11-28 09:03:14.646505] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:20.637 [2024-11-28 09:03:14.646512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:20.637 [2024-11-28 09:03:14.646519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:20.637 [2024-11-28 09:03:14.646526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:20.637 [2024-11-28 09:03:14.646534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:20.637 [2024-11-28 09:03:14.646542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:20.637 [2024-11-28 09:03:14.646549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:20.637 [2024-11-28 09:03:14.646556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:20.637 [2024-11-28 09:03:14.646562] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.637 [2024-11-28 09:03:14.646571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.638 [2024-11-28 09:03:14.646578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.638 [2024-11-28 09:03:14.646588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:19:20.638 [2024-11-28 09:03:14.646595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.638 [2024-11-28 09:03:14.646602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.638 [2024-11-28 09:03:14.646611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.646621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.638 [2024-11-28 09:03:14.646628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:19:20.638 [2024-11-28 09:03:14.646637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.674666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.674730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.638 [2024-11-28 09:03:14.674756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.959 ms 00:19:20.638 [2024-11-28 09:03:14.674775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.674917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.674932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.638 [2024-11-28 09:03:14.674944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:20.638 [2024-11-28 09:03:14.674960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.690917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.690962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.638 [2024-11-28 
09:03:14.690974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.877 ms 00:19:20.638 [2024-11-28 09:03:14.690983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.691020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.691029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.638 [2024-11-28 09:03:14.691038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.638 [2024-11-28 09:03:14.691047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.691753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.691816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.638 [2024-11-28 09:03:14.691829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:19:20.638 [2024-11-28 09:03:14.691839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.692002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.692014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.638 [2024-11-28 09:03:14.692023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:20.638 [2024-11-28 09:03:14.692036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.701473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.701516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.638 [2024-11-28 09:03:14.701534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.412 ms 00:19:20.638 [2024-11-28 09:03:14.701543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:20.638 [2024-11-28 09:03:14.706347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:20.638 [2024-11-28 09:03:14.706401] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.638 [2024-11-28 09:03:14.706414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.706424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.638 [2024-11-28 09:03:14.706434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:19:20.638 [2024-11-28 09:03:14.706442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.722613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.722688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.638 [2024-11-28 09:03:14.722704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.111 ms 00:19:20.638 [2024-11-28 09:03:14.722713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.725397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.725460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.638 [2024-11-28 09:03:14.725471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:19:20.638 [2024-11-28 09:03:14.725479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.727664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.727709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.638 [2024-11-28 09:03:14.727721] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:19:20.638 [2024-11-28 09:03:14.727729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.638 [2024-11-28 09:03:14.728126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.638 [2024-11-28 09:03:14.728155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.638 [2024-11-28 09:03:14.728172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:20.638 [2024-11-28 09:03:14.728181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.757627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.757701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.901 [2024-11-28 09:03:14.757715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.420 ms 00:19:20.901 [2024-11-28 09:03:14.757725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.767266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.901 [2024-11-28 09:03:14.771162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.771209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.901 [2024-11-28 09:03:14.771234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:19:20.901 [2024-11-28 09:03:14.771246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.771324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.771336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.901 [2024-11-28 09:03:14.771349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:20.901 
[2024-11-28 09:03:14.771359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.771442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.771454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.901 [2024-11-28 09:03:14.771464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:20.901 [2024-11-28 09:03:14.771477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.771500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.771509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.901 [2024-11-28 09:03:14.771519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.901 [2024-11-28 09:03:14.771528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.771576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.901 [2024-11-28 09:03:14.771590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.771598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.901 [2024-11-28 09:03:14.771607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:20.901 [2024-11-28 09:03:14.771615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.777879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.777922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.901 [2024-11-28 09:03:14.777934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.241 ms 00:19:20.901 [2024-11-28 09:03:14.777943] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.778037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.901 [2024-11-28 09:03:14.778048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.901 [2024-11-28 09:03:14.778058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:20.901 [2024-11-28 09:03:14.778070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.901 [2024-11-28 09:03:14.779611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.914 ms, result 0 00:19:21.846  [2024-11-28T09:03:17.352Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-28T09:03:18.297Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-28T09:03:19.243Z] Copying: 42/1024 [MB] (12 MBps) [2024-11-28T09:03:20.189Z] Copying: 54/1024 [MB] (12 MBps) [2024-11-28T09:03:21.135Z] Copying: 67/1024 [MB] (12 MBps) [2024-11-28T09:03:22.076Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-28T09:03:23.019Z] Copying: 93/1024 [MB] (14 MBps) [2024-11-28T09:03:24.405Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-28T09:03:24.974Z] Copying: 119/1024 [MB] (13 MBps) [2024-11-28T09:03:26.359Z] Copying: 134/1024 [MB] (15 MBps) [2024-11-28T09:03:27.303Z] Copying: 145/1024 [MB] (10 MBps) [2024-11-28T09:03:28.242Z] Copying: 157/1024 [MB] (12 MBps) [2024-11-28T09:03:29.180Z] Copying: 172/1024 [MB] (14 MBps) [2024-11-28T09:03:30.125Z] Copying: 192/1024 [MB] (19 MBps) [2024-11-28T09:03:31.067Z] Copying: 204/1024 [MB] (12 MBps) [2024-11-28T09:03:32.013Z] Copying: 218/1024 [MB] (14 MBps) [2024-11-28T09:03:33.402Z] Copying: 235/1024 [MB] (16 MBps) [2024-11-28T09:03:33.977Z] Copying: 253/1024 [MB] (17 MBps) [2024-11-28T09:03:35.365Z] Copying: 272/1024 [MB] (19 MBps) [2024-11-28T09:03:36.362Z] Copying: 289/1024 [MB] (16 MBps) [2024-11-28T09:03:37.308Z] Copying: 309/1024 [MB] (19 MBps) [2024-11-28T09:03:38.248Z] Copying: 324/1024 [MB] 
(15 MBps) [2024-11-28T09:03:39.180Z] Copying: 335/1024 [MB] (11 MBps) [2024-11-28T09:03:40.120Z] Copying: 352/1024 [MB] (16 MBps) [2024-11-28T09:03:41.064Z] Copying: 368/1024 [MB] (15 MBps) [2024-11-28T09:03:42.002Z] Copying: 384/1024 [MB] (16 MBps) [2024-11-28T09:03:43.383Z] Copying: 402/1024 [MB] (17 MBps) [2024-11-28T09:03:44.326Z] Copying: 427/1024 [MB] (24 MBps) [2024-11-28T09:03:45.269Z] Copying: 449/1024 [MB] (22 MBps) [2024-11-28T09:03:46.213Z] Copying: 463/1024 [MB] (14 MBps) [2024-11-28T09:03:47.158Z] Copying: 482/1024 [MB] (18 MBps) [2024-11-28T09:03:48.095Z] Copying: 502/1024 [MB] (19 MBps) [2024-11-28T09:03:49.038Z] Copying: 521/1024 [MB] (19 MBps) [2024-11-28T09:03:49.981Z] Copying: 540/1024 [MB] (18 MBps) [2024-11-28T09:03:51.369Z] Copying: 562/1024 [MB] (21 MBps) [2024-11-28T09:03:52.314Z] Copying: 575/1024 [MB] (13 MBps) [2024-11-28T09:03:53.259Z] Copying: 586/1024 [MB] (10 MBps) [2024-11-28T09:03:54.205Z] Copying: 599/1024 [MB] (13 MBps) [2024-11-28T09:03:55.149Z] Copying: 612/1024 [MB] (13 MBps) [2024-11-28T09:03:56.093Z] Copying: 630/1024 [MB] (17 MBps) [2024-11-28T09:03:57.040Z] Copying: 640/1024 [MB] (10 MBps) [2024-11-28T09:03:57.983Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-28T09:03:59.368Z] Copying: 663/1024 [MB] (12 MBps) [2024-11-28T09:04:00.308Z] Copying: 677/1024 [MB] (13 MBps) [2024-11-28T09:04:01.249Z] Copying: 698/1024 [MB] (20 MBps) [2024-11-28T09:04:02.192Z] Copying: 714/1024 [MB] (16 MBps) [2024-11-28T09:04:03.133Z] Copying: 727/1024 [MB] (13 MBps) [2024-11-28T09:04:04.071Z] Copying: 750/1024 [MB] (23 MBps) [2024-11-28T09:04:05.014Z] Copying: 764/1024 [MB] (13 MBps) [2024-11-28T09:04:06.396Z] Copying: 786/1024 [MB] (22 MBps) [2024-11-28T09:04:06.969Z] Copying: 808/1024 [MB] (21 MBps) [2024-11-28T09:04:08.440Z] Copying: 821/1024 [MB] (12 MBps) [2024-11-28T09:04:09.006Z] Copying: 834/1024 [MB] (13 MBps) [2024-11-28T09:04:10.385Z] Copying: 856/1024 [MB] (21 MBps) [2024-11-28T09:04:11.327Z] Copying: 871/1024 [MB] (15 MBps) 
[2024-11-28T09:04:12.269Z] Copying: 883/1024 [MB] (11 MBps) [2024-11-28T09:04:13.213Z] Copying: 895/1024 [MB] (12 MBps) [2024-11-28T09:04:14.156Z] Copying: 916/1024 [MB] (21 MBps) [2024-11-28T09:04:15.107Z] Copying: 937/1024 [MB] (20 MBps) [2024-11-28T09:04:16.056Z] Copying: 953/1024 [MB] (16 MBps) [2024-11-28T09:04:16.999Z] Copying: 964/1024 [MB] (10 MBps) [2024-11-28T09:04:18.387Z] Copying: 977/1024 [MB] (13 MBps) [2024-11-28T09:04:19.330Z] Copying: 989/1024 [MB] (12 MBps) [2024-11-28T09:04:19.900Z] Copying: 1003/1024 [MB] (13 MBps) [2024-11-28T09:04:20.160Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 09:04:20.139613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.139687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.040 [2024-11-28 09:04:20.139703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:26.040 [2024-11-28 09:04:20.139712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.139742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.040 [2024-11-28 09:04:20.140315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.140342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.040 [2024-11-28 09:04:20.140352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:26.040 [2024-11-28 09:04:20.140360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.140589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.140600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.040 [2024-11-28 09:04:20.140609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 
00:20:26.040 [2024-11-28 09:04:20.140617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.144672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.144703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.040 [2024-11-28 09:04:20.144712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.040 ms 00:20:26.040 [2024-11-28 09:04:20.144720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.150961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.151004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.040 [2024-11-28 09:04:20.151013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:20:26.040 [2024-11-28 09:04:20.151021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.153301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.153336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.040 [2024-11-28 09:04:20.153345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.216 ms 00:20:26.040 [2024-11-28 09:04:20.153353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.157273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.157311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.040 [2024-11-28 09:04:20.157322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:20:26.040 [2024-11-28 09:04:20.157340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.040 [2024-11-28 09:04:20.157453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:26.040 [2024-11-28 09:04:20.157463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.040 [2024-11-28 09:04:20.157476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:26.040 [2024-11-28 09:04:20.157483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.302 [2024-11-28 09:04:20.160232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.302 [2024-11-28 09:04:20.160268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.302 [2024-11-28 09:04:20.160278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.734 ms 00:20:26.302 [2024-11-28 09:04:20.160286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.302 [2024-11-28 09:04:20.162604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.302 [2024-11-28 09:04:20.162638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.302 [2024-11-28 09:04:20.162647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:20:26.302 [2024-11-28 09:04:20.162653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.302 [2024-11-28 09:04:20.164504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.302 [2024-11-28 09:04:20.164533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.302 [2024-11-28 09:04:20.164542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:20:26.302 [2024-11-28 09:04:20.164549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.302 [2024-11-28 09:04:20.166649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.302 [2024-11-28 09:04:20.166680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.302 [2024-11-28 09:04:20.166689] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:20:26.302 [2024-11-28 09:04:20.166696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.302 [2024-11-28 09:04:20.166723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.302 [2024-11-28 09:04:20.166743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: 
free 00:20:26.302 [2024-11-28 09:04:20.166853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:20:26.302 [2024-11-28 09:04:20.166963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.166994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 
0 state: free 00:20:26.302 [2024-11-28 09:04:20.167073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 
261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 
/ 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.302 [2024-11-28 09:04:20.167937] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.302 [2024-11-28 09:04:20.167951] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:20:26.302 [2024-11-28 09:04:20.167960] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.302 [2024-11-28 09:04:20.167968] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.302 [2024-11-28 09:04:20.167976] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.303 [2024-11-28 09:04:20.167983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.303 [2024-11-28 09:04:20.167991] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.303 [2024-11-28 09:04:20.167998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.303 [2024-11-28 09:04:20.168007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.303 [2024-11-28 09:04:20.168013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.303 [2024-11-28 09:04:20.168020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.303 [2024-11-28 09:04:20.168027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.303 [2024-11-28 
09:04:20.168034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.303 [2024-11-28 09:04:20.168048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:20:26.303 [2024-11-28 09:04:20.168056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.169860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.303 [2024-11-28 09:04:20.169888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.303 [2024-11-28 09:04:20.169899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:20:26.303 [2024-11-28 09:04:20.169908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.169998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.303 [2024-11-28 09:04:20.170011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.303 [2024-11-28 09:04:20.170020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:26.303 [2024-11-28 09:04:20.170029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.175452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.175484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.303 [2024-11-28 09:04:20.175494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.175502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.175550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.175567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.303 [2024-11-28 09:04:20.175575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.175583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.175639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.175650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.303 [2024-11-28 09:04:20.175658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.175665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.175681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.175690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.303 [2024-11-28 09:04:20.175704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.175711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.187772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.187824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.303 [2024-11-28 09:04:20.187834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.187842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.303 [2024-11-28 09:04:20.196636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.196644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196692] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.303 [2024-11-28 09:04:20.196710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.196718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.303 [2024-11-28 09:04:20.196766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.196776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.303 [2024-11-28 09:04:20.196882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.196891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.303 [2024-11-28 09:04:20.196939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.196946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.196987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.196996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:20:26.303 [2024-11-28 09:04:20.197004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.197011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.197058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.303 [2024-11-28 09:04:20.197068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.303 [2024-11-28 09:04:20.197075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.303 [2024-11-28 09:04:20.197085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.303 [2024-11-28 09:04:20.197208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.568 ms, result 0 00:20:26.303 00:20:26.303 00:20:26.303 09:04:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:28.849 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:28.849 09:04:22 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:28.849 [2024-11-28 09:04:22.612174] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:28.849 [2024-11-28 09:04:22.612269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88170 ] 00:20:28.849 [2024-11-28 09:04:22.754116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.849 [2024-11-28 09:04:22.801190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.849 [2024-11-28 09:04:22.915003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.849 [2024-11-28 09:04:22.915076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.110 [2024-11-28 09:04:23.078635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.110 [2024-11-28 09:04:23.078694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.110 [2024-11-28 09:04:23.078719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:29.110 [2024-11-28 09:04:23.078729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.110 [2024-11-28 09:04:23.078794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.110 [2024-11-28 09:04:23.078827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.110 [2024-11-28 09:04:23.078838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:29.110 [2024-11-28 09:04:23.078847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.110 [2024-11-28 09:04:23.078879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.110 [2024-11-28 09:04:23.079187] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.110 [2024-11-28 
09:04:23.079236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.110 [2024-11-28 09:04:23.079250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.110 [2024-11-28 09:04:23.079266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:20:29.110 [2024-11-28 09:04:23.079280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.110 [2024-11-28 09:04:23.081516] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.110 [2024-11-28 09:04:23.086178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.110 [2024-11-28 09:04:23.086229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.110 [2024-11-28 09:04:23.086241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:20:29.110 [2024-11-28 09:04:23.086250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.110 [2024-11-28 09:04:23.086339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.086354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.111 [2024-11-28 09:04:23.086371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:29.111 [2024-11-28 09:04:23.086379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.097752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.097814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.111 [2024-11-28 09:04:23.097827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.322 ms 00:20:29.111 [2024-11-28 09:04:23.097840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.097941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.097952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.111 [2024-11-28 09:04:23.097962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:29.111 [2024-11-28 09:04:23.097971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.098036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.098050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.111 [2024-11-28 09:04:23.098061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.111 [2024-11-28 09:04:23.098070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.098098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.111 [2024-11-28 09:04:23.100742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.100782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.111 [2024-11-28 09:04:23.100793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:20:29.111 [2024-11-28 09:04:23.100818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.100854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.100863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.111 [2024-11-28 09:04:23.100872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:29.111 [2024-11-28 09:04:23.100881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.100903] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.111 
[2024-11-28 09:04:23.100938] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.111 [2024-11-28 09:04:23.100981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.111 [2024-11-28 09:04:23.101006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.111 [2024-11-28 09:04:23.101120] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.111 [2024-11-28 09:04:23.101133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.111 [2024-11-28 09:04:23.101144] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.111 [2024-11-28 09:04:23.101155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101181] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:29.111 [2024-11-28 09:04:23.101190] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.111 [2024-11-28 09:04:23.101201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.111 [2024-11-28 09:04:23.101210] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.111 [2024-11-28 09:04:23.101220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.101229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.111 [2024-11-28 09:04:23.101239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.320 ms 00:20:29.111 [2024-11-28 09:04:23.101246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.101334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.111 [2024-11-28 09:04:23.101347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.111 [2024-11-28 09:04:23.101357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:29.111 [2024-11-28 09:04:23.101365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.111 [2024-11-28 09:04:23.101467] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.111 [2024-11-28 09:04:23.101490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.111 [2024-11-28 09:04:23.101500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.111 [2024-11-28 09:04:23.101559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.111 [2024-11-28 09:04:23.101587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.111 [2024-11-28 09:04:23.101609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.111 [2024-11-28 09:04:23.101617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:29.111 [2024-11-28 09:04:23.101629] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.111 [2024-11-28 09:04:23.101638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.111 [2024-11-28 09:04:23.101647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:29.111 [2024-11-28 09:04:23.101662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.111 [2024-11-28 09:04:23.101681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.111 [2024-11-28 09:04:23.101707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.111 [2024-11-28 09:04:23.101732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.111 [2024-11-28 09:04:23.101759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.111 [2024-11-28 09:04:23.101788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101795] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.111 [2024-11-28 09:04:23.101827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.111 [2024-11-28 09:04:23.101842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.111 [2024-11-28 09:04:23.101849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:29.111 [2024-11-28 09:04:23.101857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.111 [2024-11-28 09:04:23.101865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.111 [2024-11-28 09:04:23.101873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:29.111 [2024-11-28 09:04:23.101880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.111 [2024-11-28 09:04:23.101896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:29.111 [2024-11-28 09:04:23.101905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101913] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.111 [2024-11-28 09:04:23.101924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.111 [2024-11-28 09:04:23.101937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.111 [2024-11-28 09:04:23.101950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.111 [2024-11-28 09:04:23.101961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.111 
[2024-11-28 09:04:23.101970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.111 [2024-11-28 09:04:23.101977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.111 [2024-11-28 09:04:23.101985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.111 [2024-11-28 09:04:23.101995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.111 [2024-11-28 09:04:23.102003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.111 [2024-11-28 09:04:23.102013] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.111 [2024-11-28 09:04:23.102023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.111 [2024-11-28 09:04:23.102032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:29.111 [2024-11-28 09:04:23.102042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:29.112 [2024-11-28 09:04:23.102050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:29.112 [2024-11-28 09:04:23.102058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:29.112 [2024-11-28 09:04:23.102066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:29.112 [2024-11-28 09:04:23.102078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:29.112 [2024-11-28 09:04:23.102086] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:29.112 [2024-11-28 09:04:23.102093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:29.112 [2024-11-28 09:04:23.102101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:29.112 [2024-11-28 09:04:23.102108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:29.112 [2024-11-28 09:04:23.102147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.112 [2024-11-28 09:04:23.102156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.112 [2024-11-28 09:04:23.102175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:20:29.112 [2024-11-28 09:04:23.102183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.112 [2024-11-28 09:04:23.102191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.112 [2024-11-28 09:04:23.102200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.102212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.112 [2024-11-28 09:04:23.102223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:20:29.112 [2024-11-28 09:04:23.102234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.130679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.130738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.112 [2024-11-28 09:04:23.130761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.379 ms 00:20:29.112 [2024-11-28 09:04:23.130772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.130893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.130906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.112 [2024-11-28 09:04:23.130918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:29.112 [2024-11-28 09:04:23.130935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.146757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.146823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.112 [2024-11-28 
09:04:23.146835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.748 ms 00:20:29.112 [2024-11-28 09:04:23.146844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.146885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.146894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.112 [2024-11-28 09:04:23.146903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.112 [2024-11-28 09:04:23.146911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.147612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.147661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.112 [2024-11-28 09:04:23.147679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:20:29.112 [2024-11-28 09:04:23.147689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.147873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.147886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.112 [2024-11-28 09:04:23.147895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:20:29.112 [2024-11-28 09:04:23.147903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.157329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.157374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.112 [2024-11-28 09:04:23.157392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.399 ms 00:20:29.112 [2024-11-28 09:04:23.157402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:29.112 [2024-11-28 09:04:23.162142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:29.112 [2024-11-28 09:04:23.162195] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.112 [2024-11-28 09:04:23.162209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.162219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.112 [2024-11-28 09:04:23.162229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:20:29.112 [2024-11-28 09:04:23.162237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.178420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.178474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.112 [2024-11-28 09:04:23.178491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.130 ms 00:20:29.112 [2024-11-28 09:04:23.178500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.181157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.181200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.112 [2024-11-28 09:04:23.181211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.604 ms 00:20:29.112 [2024-11-28 09:04:23.181220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.183909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.183953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.112 [2024-11-28 09:04:23.183964] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:20:29.112 [2024-11-28 09:04:23.183972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.184325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.184351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.112 [2024-11-28 09:04:23.184362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:29.112 [2024-11-28 09:04:23.184371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.213676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.213753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.112 [2024-11-28 09:04:23.213768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.280 ms 00:20:29.112 [2024-11-28 09:04:23.213778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.222456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.112 [2024-11-28 09:04:23.226117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.226172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.112 [2024-11-28 09:04:23.226204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.232 ms 00:20:29.112 [2024-11-28 09:04:23.226222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.226343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.226364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.112 [2024-11-28 09:04:23.226380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:29.112 
[2024-11-28 09:04:23.226394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.226514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.226532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.112 [2024-11-28 09:04:23.226547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:29.112 [2024-11-28 09:04:23.226572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.226608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.226623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.112 [2024-11-28 09:04:23.226634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:29.112 [2024-11-28 09:04:23.226653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.112 [2024-11-28 09:04:23.226702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.112 [2024-11-28 09:04:23.226730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.112 [2024-11-28 09:04:23.226739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.112 [2024-11-28 09:04:23.226749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:29.112 [2024-11-28 09:04:23.226757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.373 [2024-11-28 09:04:23.233087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.373 [2024-11-28 09:04:23.233140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.373 [2024-11-28 09:04:23.233153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.305 ms 00:20:29.373 [2024-11-28 09:04:23.233162] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.373 [2024-11-28 09:04:23.233256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.373 [2024-11-28 09:04:23.233269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.373 [2024-11-28 09:04:23.233278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:29.373 [2024-11-28 09:04:23.233292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.373 [2024-11-28 09:04:23.235049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.834 ms, result 0 00:20:30.315  [2024-11-28T09:04:25.377Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-28T09:04:26.318Z] Copying: 34/1024 [MB] (16 MBps) [2024-11-28T09:04:27.261Z] Copying: 53/1024 [MB] (19 MBps) [2024-11-28T09:04:28.646Z] Copying: 69/1024 [MB] (16 MBps) [2024-11-28T09:04:29.592Z] Copying: 86/1024 [MB] (16 MBps) [2024-11-28T09:04:30.536Z] Copying: 103/1024 [MB] (16 MBps) [2024-11-28T09:04:31.478Z] Copying: 117/1024 [MB] (14 MBps) [2024-11-28T09:04:32.415Z] Copying: 129952/1048576 [kB] (9960 kBps) [2024-11-28T09:04:33.348Z] Copying: 139/1024 [MB] (12 MBps) [2024-11-28T09:04:34.282Z] Copying: 157/1024 [MB] (17 MBps) [2024-11-28T09:04:35.668Z] Copying: 180/1024 [MB] (23 MBps) [2024-11-28T09:04:36.612Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-28T09:04:37.556Z] Copying: 207/1024 [MB] (15 MBps) [2024-11-28T09:04:38.490Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-28T09:04:39.424Z] Copying: 237/1024 [MB] (19 MBps) [2024-11-28T09:04:40.442Z] Copying: 255/1024 [MB] (18 MBps) [2024-11-28T09:04:41.374Z] Copying: 274/1024 [MB] (18 MBps) [2024-11-28T09:04:42.306Z] Copying: 291/1024 [MB] (17 MBps) [2024-11-28T09:04:43.685Z] Copying: 316/1024 [MB] (25 MBps) [2024-11-28T09:04:44.256Z] Copying: 339/1024 [MB] (22 MBps) [2024-11-28T09:04:45.643Z] Copying: 353/1024 [MB] (14 MBps) [2024-11-28T09:04:46.587Z] Copying: 
369/1024 [MB] (16 MBps) [2024-11-28T09:04:47.530Z] Copying: 387/1024 [MB] (17 MBps) [2024-11-28T09:04:48.474Z] Copying: 401/1024 [MB] (13 MBps) [2024-11-28T09:04:49.411Z] Copying: 418/1024 [MB] (16 MBps) [2024-11-28T09:04:50.345Z] Copying: 435/1024 [MB] (17 MBps) [2024-11-28T09:04:51.288Z] Copying: 469/1024 [MB] (33 MBps) [2024-11-28T09:04:52.672Z] Copying: 485/1024 [MB] (15 MBps) [2024-11-28T09:04:53.246Z] Copying: 502/1024 [MB] (16 MBps) [2024-11-28T09:04:54.633Z] Copying: 516/1024 [MB] (13 MBps) [2024-11-28T09:04:55.576Z] Copying: 542/1024 [MB] (26 MBps) [2024-11-28T09:04:56.521Z] Copying: 569/1024 [MB] (26 MBps) [2024-11-28T09:04:57.467Z] Copying: 597/1024 [MB] (28 MBps) [2024-11-28T09:04:58.409Z] Copying: 618/1024 [MB] (20 MBps) [2024-11-28T09:04:59.354Z] Copying: 634/1024 [MB] (16 MBps) [2024-11-28T09:05:00.299Z] Copying: 649/1024 [MB] (15 MBps) [2024-11-28T09:05:01.688Z] Copying: 668/1024 [MB] (19 MBps) [2024-11-28T09:05:02.262Z] Copying: 685/1024 [MB] (16 MBps) [2024-11-28T09:05:03.661Z] Copying: 698/1024 [MB] (13 MBps) [2024-11-28T09:05:04.607Z] Copying: 712/1024 [MB] (14 MBps) [2024-11-28T09:05:05.552Z] Copying: 728/1024 [MB] (15 MBps) [2024-11-28T09:05:06.495Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-28T09:05:07.440Z] Copying: 750/1024 [MB] (11 MBps) [2024-11-28T09:05:08.465Z] Copying: 762/1024 [MB] (12 MBps) [2024-11-28T09:05:09.402Z] Copying: 773/1024 [MB] (10 MBps) [2024-11-28T09:05:10.345Z] Copying: 799/1024 [MB] (26 MBps) [2024-11-28T09:05:11.290Z] Copying: 815/1024 [MB] (15 MBps) [2024-11-28T09:05:12.671Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-28T09:05:13.604Z] Copying: 841/1024 [MB] (15 MBps) [2024-11-28T09:05:14.535Z] Copying: 873/1024 [MB] (32 MBps) [2024-11-28T09:05:15.468Z] Copying: 905/1024 [MB] (31 MBps) [2024-11-28T09:05:16.403Z] Copying: 936/1024 [MB] (30 MBps) [2024-11-28T09:05:17.349Z] Copying: 964/1024 [MB] (28 MBps) [2024-11-28T09:05:18.293Z] Copying: 982/1024 [MB] (18 MBps) [2024-11-28T09:05:19.681Z] Copying: 998/1024 [MB] (15 
MBps) [2024-11-28T09:05:20.254Z] Copying: 1012/1024 [MB] (14 MBps) [2024-11-28T09:05:21.204Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-28T09:05:21.204Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-28 09:05:21.074371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.074656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:27.084 [2024-11-28 09:05:21.074688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.084 [2024-11-28 09:05:21.074700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.078257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:27.084 [2024-11-28 09:05:21.080911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.080958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:27.084 [2024-11-28 09:05:21.080972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:21:27.084 [2024-11-28 09:05:21.080981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.093400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.093469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:27.084 [2024-11-28 09:05:21.093484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.193 ms 00:21:27.084 [2024-11-28 09:05:21.093493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.119052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.119105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:27.084 [2024-11-28 09:05:21.119117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.538 ms 00:21:27.084 [2024-11-28 09:05:21.119126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.125295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.125346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:27.084 [2024-11-28 09:05:21.125359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:21:27.084 [2024-11-28 09:05:21.125368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.128537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.128587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:27.084 [2024-11-28 09:05:21.128600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:21:27.084 [2024-11-28 09:05:21.128609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.084 [2024-11-28 09:05:21.134171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.084 [2024-11-28 09:05:21.134222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:27.084 [2024-11-28 09:05:21.134234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.516 ms 00:21:27.084 [2024-11-28 09:05:21.134243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.347 [2024-11-28 09:05:21.424626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.347 [2024-11-28 09:05:21.424689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:27.347 [2024-11-28 09:05:21.424702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 290.319 ms 00:21:27.347 [2024-11-28 09:05:21.424712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.347 [2024-11-28 09:05:21.428216] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.347 [2024-11-28 09:05:21.428264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:27.347 [2024-11-28 09:05:21.428276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:21:27.347 [2024-11-28 09:05:21.428285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.347 [2024-11-28 09:05:21.431279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.347 [2024-11-28 09:05:21.431327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:27.347 [2024-11-28 09:05:21.431337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:21:27.347 [2024-11-28 09:05:21.431345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.347 [2024-11-28 09:05:21.433312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.347 [2024-11-28 09:05:21.433359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:27.347 [2024-11-28 09:05:21.433370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:21:27.347 [2024-11-28 09:05:21.433378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.348 [2024-11-28 09:05:21.435294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.348 [2024-11-28 09:05:21.435342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:27.348 [2024-11-28 09:05:21.435352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:21:27.348 [2024-11-28 09:05:21.435360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.348 [2024-11-28 09:05:21.435399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:27.348 [2024-11-28 09:05:21.435416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101632 
/ 261120 wr_cnt: 1 state: open 00:21:27.348 [2024-11-28 09:05:21.435428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 
/ 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
57: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.435994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:27.348 [2024-11-28 09:05:21.436160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436294] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:27.349 [2024-11-28 09:05:21.436323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:27.349 [2024-11-28 09:05:21.436333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:21:27.349 [2024-11-28 09:05:21.436342] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101632 00:21:27.349 [2024-11-28 09:05:21.436352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102592 00:21:27.349 [2024-11-28 09:05:21.436362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101632 00:21:27.349 [2024-11-28 09:05:21.436380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:21:27.349 [2024-11-28 09:05:21.436389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:27.349 [2024-11-28 09:05:21.436398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:27.349 [2024-11-28 09:05:21.436407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:27.349 [2024-11-28 09:05:21.436415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:27.349 [2024-11-28 09:05:21.436422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:27.349 [2024-11-28 09:05:21.436431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.349 [2024-11-28 09:05:21.436440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:27.349 [2024-11-28 09:05:21.436449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:21:27.349 [2024-11-28 09:05:21.436457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.439599] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.349 [2024-11-28 09:05:21.439641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:27.349 [2024-11-28 09:05:21.439652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:21:27.349 [2024-11-28 09:05:21.439661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.439842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.349 [2024-11-28 09:05:21.439856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:27.349 [2024-11-28 09:05:21.439866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:27.349 [2024-11-28 09:05:21.439881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.449066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.349 [2024-11-28 09:05:21.449117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.349 [2024-11-28 09:05:21.449129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.349 [2024-11-28 09:05:21.449139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.449200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.349 [2024-11-28 09:05:21.449211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.349 [2024-11-28 09:05:21.449220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.349 [2024-11-28 09:05:21.449229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.449278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.349 [2024-11-28 09:05:21.449300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.349 
[2024-11-28 09:05:21.449309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.349 [2024-11-28 09:05:21.449319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.349 [2024-11-28 09:05:21.449336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.349 [2024-11-28 09:05:21.449345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.349 [2024-11-28 09:05:21.449357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.349 [2024-11-28 09:05:21.449366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.468577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.468640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.611 [2024-11-28 09:05:21.468652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.468661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.611 [2024-11-28 09:05:21.484176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.611 [2024-11-28 09:05:21.484292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484302] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.611 [2024-11-28 09:05:21.484375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.611 [2024-11-28 09:05:21.484513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:27.611 [2024-11-28 09:05:21.484585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.611 [2024-11-28 09:05:21.484669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.484742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.611 [2024-11-28 09:05:21.484823] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.611 [2024-11-28 09:05:21.484834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.611 [2024-11-28 09:05:21.484845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.611 [2024-11-28 09:05:21.485022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 411.966 ms, result 0 00:21:28.555 00:21:28.555 00:21:28.555 09:05:22 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:28.555 [2024-11-28 09:05:22.439430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:28.555 [2024-11-28 09:05:22.439583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88786 ] 00:21:28.555 [2024-11-28 09:05:22.594210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.555 [2024-11-28 09:05:22.664971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.816 [2024-11-28 09:05:22.813030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:28.816 [2024-11-28 09:05:22.813119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:29.079 [2024-11-28 09:05:22.977220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.977280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:29.079 [2024-11-28 09:05:22.977301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.006 ms 00:21:29.079 [2024-11-28 09:05:22.977312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.977373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.977385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:29.079 [2024-11-28 09:05:22.977395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:29.079 [2024-11-28 09:05:22.977403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.977429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:29.079 [2024-11-28 09:05:22.977883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:29.079 [2024-11-28 09:05:22.977927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.977936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:29.079 [2024-11-28 09:05:22.977956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:21:29.079 [2024-11-28 09:05:22.977968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.980210] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:29.079 [2024-11-28 09:05:22.984903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.984951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:29.079 [2024-11-28 09:05:22.984970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:21:29.079 [2024-11-28 09:05:22.984979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.985062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:29.079 [2024-11-28 09:05:22.985073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:29.079 [2024-11-28 09:05:22.985088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:29.079 [2024-11-28 09:05:22.985101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.996471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.996515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:29.079 [2024-11-28 09:05:22.996538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.316 ms 00:21:29.079 [2024-11-28 09:05:22.996550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.996658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.996670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:29.079 [2024-11-28 09:05:22.996678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:29.079 [2024-11-28 09:05:22.996690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.996759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.996771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:29.079 [2024-11-28 09:05:22.996785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:29.079 [2024-11-28 09:05:22.996794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.996845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:29.079 [2024-11-28 09:05:22.999492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.999531] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:29.079 [2024-11-28 09:05:22.999542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:21:29.079 [2024-11-28 09:05:22.999550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.999602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:22.999616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:29.079 [2024-11-28 09:05:22.999626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:29.079 [2024-11-28 09:05:22.999639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:22.999668] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:29.079 [2024-11-28 09:05:22.999694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:29.079 [2024-11-28 09:05:22.999738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:29.079 [2024-11-28 09:05:22.999757] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:29.079 [2024-11-28 09:05:22.999890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:29.079 [2024-11-28 09:05:22.999905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:29.079 [2024-11-28 09:05:22.999916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:29.079 [2024-11-28 09:05:22.999928] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:29.079 [2024-11-28 09:05:22.999943] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:29.079 [2024-11-28 09:05:22.999953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:29.079 [2024-11-28 09:05:22.999961] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:29.079 [2024-11-28 09:05:22.999973] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:29.079 [2024-11-28 09:05:22.999988] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:29.079 [2024-11-28 09:05:23.000001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:23.000011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:29.079 [2024-11-28 09:05:23.000019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:21:29.079 [2024-11-28 09:05:23.000033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:23.000118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.079 [2024-11-28 09:05:23.000133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:29.079 [2024-11-28 09:05:23.000143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:29.079 [2024-11-28 09:05:23.000153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.079 [2024-11-28 09:05:23.000254] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:29.079 [2024-11-28 09:05:23.000274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:29.079 [2024-11-28 09:05:23.000285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.079 [2024-11-28 09:05:23.000302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000312] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:21:29.080 [2024-11-28 09:05:23.000322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:29.080 [2024-11-28 09:05:23.000356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.080 [2024-11-28 09:05:23.000373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:29.080 [2024-11-28 09:05:23.000381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:29.080 [2024-11-28 09:05:23.000389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.080 [2024-11-28 09:05:23.000398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:29.080 [2024-11-28 09:05:23.000407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:29.080 [2024-11-28 09:05:23.000416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:29.080 [2024-11-28 09:05:23.000433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:29.080 [2024-11-28 09:05:23.000462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000481] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:29.080 [2024-11-28 09:05:23.000492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:29.080 [2024-11-28 09:05:23.000520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:29.080 [2024-11-28 09:05:23.000543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:29.080 [2024-11-28 09:05:23.000565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.080 [2024-11-28 09:05:23.000578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:29.080 [2024-11-28 09:05:23.000585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:29.080 [2024-11-28 09:05:23.000592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.080 [2024-11-28 09:05:23.000599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:29.080 [2024-11-28 09:05:23.000606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:29.080 [2024-11-28 09:05:23.000616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000622] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:29.080 [2024-11-28 09:05:23.000629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:29.080 [2024-11-28 09:05:23.000636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000643] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:29.080 [2024-11-28 09:05:23.000651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:29.080 [2024-11-28 09:05:23.000658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.080 [2024-11-28 09:05:23.000677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:29.080 [2024-11-28 09:05:23.000684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:29.080 [2024-11-28 09:05:23.000691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:29.080 [2024-11-28 09:05:23.000697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:29.080 [2024-11-28 09:05:23.000704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:29.080 [2024-11-28 09:05:23.000714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:29.080 [2024-11-28 09:05:23.000724] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:29.080 [2024-11-28 09:05:23.000733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:29.080 [2024-11-28 09:05:23.000754] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:29.080 [2024-11-28 09:05:23.000762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:29.080 [2024-11-28 09:05:23.000771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:29.080 [2024-11-28 09:05:23.000779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:29.080 [2024-11-28 09:05:23.000789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:29.080 [2024-11-28 09:05:23.000814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:29.080 [2024-11-28 09:05:23.000823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:29.080 [2024-11-28 09:05:23.000830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:29.080 [2024-11-28 09:05:23.000837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:29.080 [2024-11-28 09:05:23.000880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:29.080 [2024-11-28 09:05:23.000893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:29.080 [2024-11-28 09:05:23.000913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:29.080 [2024-11-28 09:05:23.000921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:29.080 [2024-11-28 09:05:23.000929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:29.080 [2024-11-28 09:05:23.000938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.080 [2024-11-28 09:05:23.000946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:29.080 [2024-11-28 09:05:23.000959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:21:29.080 [2024-11-28 09:05:23.000970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.080 [2024-11-28 09:05:23.031742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.080 [2024-11-28 09:05:23.031826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:29.080 [2024-11-28 09:05:23.031844] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 30.705 ms 00:21:29.080 [2024-11-28 09:05:23.031869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.080 [2024-11-28 09:05:23.032003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.080 [2024-11-28 09:05:23.032021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:29.080 [2024-11-28 09:05:23.032035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:29.081 [2024-11-28 09:05:23.032051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.048133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.048184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:29.081 [2024-11-28 09:05:23.048195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms 00:21:29.081 [2024-11-28 09:05:23.048204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.048246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.048255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:29.081 [2024-11-28 09:05:23.048265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:29.081 [2024-11-28 09:05:23.048273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.049014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.049060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:29.081 [2024-11-28 09:05:23.049072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:21:29.081 [2024-11-28 09:05:23.049083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.049261] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.049273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:29.081 [2024-11-28 09:05:23.049282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:21:29.081 [2024-11-28 09:05:23.049295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.058789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.058866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:29.081 [2024-11-28 09:05:23.058885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.469 ms 00:21:29.081 [2024-11-28 09:05:23.058897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.063639] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:29.081 [2024-11-28 09:05:23.063688] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:29.081 [2024-11-28 09:05:23.063708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.063717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:29.081 [2024-11-28 09:05:23.063728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:21:29.081 [2024-11-28 09:05:23.063736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.080255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.080305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:29.081 [2024-11-28 09:05:23.080328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.463 ms 00:21:29.081 [2024-11-28 
09:05:23.080337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.083877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.083921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:29.081 [2024-11-28 09:05:23.083933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:21:29.081 [2024-11-28 09:05:23.083942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.086906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.086957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:29.081 [2024-11-28 09:05:23.086969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:21:29.081 [2024-11-28 09:05:23.086977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.087333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.087357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:29.081 [2024-11-28 09:05:23.087369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:21:29.081 [2024-11-28 09:05:23.087378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.116127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.116197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:29.081 [2024-11-28 09:05:23.116212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.724 ms 00:21:29.081 [2024-11-28 09:05:23.116221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.124592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: 
l2p maximum resident size is: 9 (of 10) MiB 00:21:29.081 [2024-11-28 09:05:23.128487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.128528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:29.081 [2024-11-28 09:05:23.128547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:21:29.081 [2024-11-28 09:05:23.128561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.128635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.128654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:29.081 [2024-11-28 09:05:23.128670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:29.081 [2024-11-28 09:05:23.128680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.130959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.131007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:29.081 [2024-11-28 09:05:23.131018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:21:29.081 [2024-11-28 09:05:23.131032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.131068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.131083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:29.081 [2024-11-28 09:05:23.131092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:29.081 [2024-11-28 09:05:23.131104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.131157] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:29.081 [2024-11-28 
09:05:23.131170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.131185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:29.081 [2024-11-28 09:05:23.131194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:29.081 [2024-11-28 09:05:23.131204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.137438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.137486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:29.081 [2024-11-28 09:05:23.137499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.205 ms 00:21:29.081 [2024-11-28 09:05:23.137508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.137625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.081 [2024-11-28 09:05:23.137638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:29.081 [2024-11-28 09:05:23.137648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:29.081 [2024-11-28 09:05:23.137658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.081 [2024-11-28 09:05:23.139094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.297 ms, result 0 00:21:30.472
[progress output condensed: "Copying: N/1024 [MB]" updates from 11/1024 MB at 2024-11-28T09:05:25Z through 1024/1024 MB at 2024-11-28T09:06:36Z, average 14 MBps]
[2024-11-28 09:06:36.186769] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.186888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.426 [2024-11-28 09:06:36.186908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:42.426 [2024-11-28 09:06:36.186918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.186946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.426 [2024-11-28 09:06:36.187941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.187980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.426 [2024-11-28 09:06:36.187992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:22:42.426 [2024-11-28 09:06:36.188002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.188517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.188537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.426 [2024-11-28 09:06:36.188548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:22:42.426 [2024-11-28 09:06:36.188566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.194964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.195009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.426 [2024-11-28 09:06:36.195022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.380 ms 00:22:42.426 [2024-11-28 09:06:36.195032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.201451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 
09:06:36.201501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.426 [2024-11-28 09:06:36.201513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:22:42.426 [2024-11-28 09:06:36.201522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.205655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.205696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.426 [2024-11-28 09:06:36.205707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:22:42.426 [2024-11-28 09:06:36.205716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.426 [2024-11-28 09:06:36.211553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.426 [2024-11-28 09:06:36.211597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.426 [2024-11-28 09:06:36.211609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:22:42.426 [2024-11-28 09:06:36.211618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.689 [2024-11-28 09:06:36.600718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.689 [2024-11-28 09:06:36.600811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.689 [2024-11-28 09:06:36.600828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 389.046 ms 00:22:42.689 [2024-11-28 09:06:36.600840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.689 [2024-11-28 09:06:36.604356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.689 [2024-11-28 09:06:36.604398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.689 [2024-11-28 09:06:36.604410] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:22:42.689 [2024-11-28 09:06:36.604418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.689 [2024-11-28 09:06:36.607362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.689 [2024-11-28 09:06:36.607417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.689 [2024-11-28 09:06:36.607427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:22:42.689 [2024-11-28 09:06:36.607436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.689 [2024-11-28 09:06:36.609678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.690 [2024-11-28 09:06:36.609720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.690 [2024-11-28 09:06:36.609729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:22:42.690 [2024-11-28 09:06:36.609737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.690 [2024-11-28 09:06:36.611876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.690 [2024-11-28 09:06:36.611914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.690 [2024-11-28 09:06:36.611924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:22:42.690 [2024-11-28 09:06:36.611932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.690 [2024-11-28 09:06:36.611970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.690 [2024-11-28 09:06:36.611988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:42.690 [2024-11-28 09:06:36.612000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612248] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612385] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612509] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 
09:06:36.612626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.690 [2024-11-28 09:06:36.612666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 
[2024-11-28 09:06:36.612736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.691 [2024-11-28 09:06:36.612868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:22:42.691 [2024-11-28 09:06:36.612884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.691 [2024-11-28 09:06:36.612894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbd333f7-9e85-49f2-b58d-d1b0fe0ae743 00:22:42.691 [2024-11-28 09:06:36.612902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:42.691 [2024-11-28 09:06:36.612912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30400 00:22:42.691 [2024-11-28 09:06:36.612921] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29440 00:22:42.691 [2024-11-28 09:06:36.612941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0326 00:22:42.691 [2024-11-28 09:06:36.612950] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.691 [2024-11-28 09:06:36.612960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.691 [2024-11-28 09:06:36.612968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.691 [2024-11-28 09:06:36.612975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.691 [2024-11-28 09:06:36.612981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.691 [2024-11-28 09:06:36.612989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.691 [2024-11-28 09:06:36.612998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.691 [2024-11-28 09:06:36.613008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:22:42.691 [2024-11-28 09:06:36.613016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.616136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.691 [2024-11-28 09:06:36.616180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.691 [2024-11-28 09:06:36.616192] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:22:42.691 [2024-11-28 09:06:36.616203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.616359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.691 [2024-11-28 09:06:36.616371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.691 [2024-11-28 09:06:36.616380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:22:42.691 [2024-11-28 09:06:36.616392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.625408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.625463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.691 [2024-11-28 09:06:36.625474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.625483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.625552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.625567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.691 [2024-11-28 09:06:36.625577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.625586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.625676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.625694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.691 [2024-11-28 09:06:36.625704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.625718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:22:42.691 [2024-11-28 09:06:36.625734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.625745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.691 [2024-11-28 09:06:36.625754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.625763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.644924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.644974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.691 [2024-11-28 09:06:36.644986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.644995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.691 [2024-11-28 09:06:36.660293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.691 [2024-11-28 09:06:36.660399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660462] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.691 [2024-11-28 09:06:36.660472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.691 [2024-11-28 09:06:36.660601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:42.691 [2024-11-28 09:06:36.660674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.691 [2024-11-28 09:06:36.660758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 09:06:36.660772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.660855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.691 [2024-11-28 09:06:36.660873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.691 [2024-11-28 09:06:36.660884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.691 [2024-11-28 
09:06:36.660894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.691 [2024-11-28 09:06:36.661062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.253 ms, result 0 00:22:42.953 00:22:42.953 00:22:42.953 09:06:36 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:45.502 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:45.502 Process with pid 86579 is not found 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86579 00:22:45.502 09:06:39 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86579 ']' 00:22:45.502 09:06:39 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86579 00:22:45.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86579) - No such process 00:22:45.502 09:06:39 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86579 is not found' 00:22:45.502 Remove shared memory files 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 
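The restore test above validates data integrity with `md5sum -c` against a previously recorded `testfile.md5`. A minimal standalone sketch of that same checksum round-trip (file names and payload here are illustrative, not taken from the test):

```shell
# Record a file's checksum, then verify it later -- the same pattern
# restore.sh uses with testfile / testfile.md5.
tmpdir=$(mktemp -d)
echo "sample payload" > "$tmpdir/testfile"
md5sum "$tmpdir/testfile" > "$tmpdir/testfile.md5"
# After a simulated shutdown/restore cycle, -c re-reads the file and
# compares against the stored digest; exit status 0 means the data
# survived intact (md5sum prints "<path>: OK").
md5sum -c "$tmpdir/testfile.md5"
rm -rf "$tmpdir"
```

The test treats any non-zero exit status from `md5sum -c` as data corruption, which is why the trap/`restore_kill` cleanup only runs after the check passes.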
00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:45.502 09:06:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:45.502 00:22:45.502 real 4m48.474s 00:22:45.502 user 4m34.975s 00:22:45.502 sys 0m12.768s 00:22:45.502 ************************************ 00:22:45.502 END TEST ftl_restore 00:22:45.502 ************************************ 00:22:45.502 09:06:39 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.502 09:06:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:45.502 09:06:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:45.502 09:06:39 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:45.502 09:06:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.502 09:06:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:45.502 ************************************ 00:22:45.502 START TEST ftl_dirty_shutdown 00:22:45.502 ************************************ 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:45.502 * Looking for test storage... 
00:22:45.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.502 --rc genhtml_branch_coverage=1 00:22:45.502 --rc genhtml_function_coverage=1 00:22:45.502 --rc genhtml_legend=1 00:22:45.502 --rc geninfo_all_blocks=1 00:22:45.502 --rc geninfo_unexecuted_blocks=1 00:22:45.502 00:22:45.502 ' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.502 --rc genhtml_branch_coverage=1 00:22:45.502 --rc genhtml_function_coverage=1 
00:22:45.502 --rc genhtml_legend=1 00:22:45.502 --rc geninfo_all_blocks=1 00:22:45.502 --rc geninfo_unexecuted_blocks=1 00:22:45.502 00:22:45.502 ' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.502 --rc genhtml_branch_coverage=1 00:22:45.502 --rc genhtml_function_coverage=1 00:22:45.502 --rc genhtml_legend=1 00:22:45.502 --rc geninfo_all_blocks=1 00:22:45.502 --rc geninfo_unexecuted_blocks=1 00:22:45.502 00:22:45.502 ' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:45.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.502 --rc genhtml_branch_coverage=1 00:22:45.502 --rc genhtml_function_coverage=1 00:22:45.502 --rc genhtml_legend=1 00:22:45.502 --rc geninfo_all_blocks=1 00:22:45.502 --rc geninfo_unexecuted_blocks=1 00:22:45.502 00:22:45.502 ' 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.502 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
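The `cmp_versions`/`lt` xtrace above shows how `scripts/common.sh` decides that lcov 1.15 predates 2.x before selecting coverage flags: both version strings are split on `.`, `-`, and `:`, and components are compared numerically, with missing components treated as 0. A hypothetical standalone rendering of that comparison (not SPDK's actual helper; assumes numeric components):

```shell
# version_lt A B: succeed (return 0) iff version A sorts before version B.
version_lt() {
    local IFS=.-:            # split on the same separators the trace uses
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # missing components default to 0, so "2" compares as "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                 # equal versions are not "less than"
}

# lcov 1.15 sorts before 2, so the lcov 1.x coverage options are selected.
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace sets `ver1[v]=1` against `ver2[v]=2` and immediately returns 0 from the first differing component.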
00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:45.503 09:06:39 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89642 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89642 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89642 ']' 00:22:45.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:45.503 09:06:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.503 [2024-11-28 09:06:39.556561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
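`waitforlisten` above blocks until the freshly started `spdk_tgt` (pid 89642 here) is up and its RPC socket accepts connections, using the same `kill -0` liveness probe that `killprocess` used earlier to detect the already-gone pid 86579. A simplified sketch of that polling pattern (retry count and sleep interval are illustrative, not SPDK's actual values):

```shell
# Start a background process, then poll with `kill -0` until it is
# confirmed running (kill -0 sends no signal; it only tests existence).
sleep 30 &
pid=$!
alive=0
for _ in $(seq 1 50); do
    if kill -0 "$pid" 2>/dev/null; then
        alive=1
        echo "process $pid is alive"
        break
    fi
    sleep 0.1
done
kill "$pid" 2>/dev/null
```

The inverse check explains the `kill: (86579) - No such process` line earlier: `kill -0` on a dead pid fails, and the harness reports "Process with pid 86579 is not found" instead of trying to terminate it.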
00:22:45.503 [2024-11-28 09:06:39.556721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89642 ] 00:22:45.764 [2024-11-28 09:06:39.711011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.764 [2024-11-28 09:06:39.782969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:46.336 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:46.597 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:46.597 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:46.597 09:06:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:46.597 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1381 -- # local nb 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:46.858 { 00:22:46.858 "name": "nvme0n1", 00:22:46.858 "aliases": [ 00:22:46.858 "a7d1192c-186e-47ed-b9a2-000bbaabadfd" 00:22:46.858 ], 00:22:46.858 "product_name": "NVMe disk", 00:22:46.858 "block_size": 4096, 00:22:46.858 "num_blocks": 1310720, 00:22:46.858 "uuid": "a7d1192c-186e-47ed-b9a2-000bbaabadfd", 00:22:46.858 "numa_id": -1, 00:22:46.858 "assigned_rate_limits": { 00:22:46.858 "rw_ios_per_sec": 0, 00:22:46.858 "rw_mbytes_per_sec": 0, 00:22:46.858 "r_mbytes_per_sec": 0, 00:22:46.858 "w_mbytes_per_sec": 0 00:22:46.858 }, 00:22:46.858 "claimed": true, 00:22:46.858 "claim_type": "read_many_write_one", 00:22:46.858 "zoned": false, 00:22:46.858 "supported_io_types": { 00:22:46.858 "read": true, 00:22:46.858 "write": true, 00:22:46.858 "unmap": true, 00:22:46.858 "flush": true, 00:22:46.858 "reset": true, 00:22:46.858 "nvme_admin": true, 00:22:46.858 "nvme_io": true, 00:22:46.858 "nvme_io_md": false, 00:22:46.858 "write_zeroes": true, 00:22:46.858 "zcopy": false, 00:22:46.858 "get_zone_info": false, 00:22:46.858 "zone_management": false, 00:22:46.858 "zone_append": false, 00:22:46.858 "compare": true, 00:22:46.858 "compare_and_write": false, 00:22:46.858 "abort": true, 00:22:46.858 "seek_hole": false, 00:22:46.858 "seek_data": false, 00:22:46.858 "copy": true, 00:22:46.858 "nvme_iov_md": false 00:22:46.858 }, 00:22:46.858 "driver_specific": { 00:22:46.858 "nvme": [ 00:22:46.858 { 00:22:46.858 "pci_address": "0000:00:11.0", 00:22:46.858 "trid": { 00:22:46.858 "trtype": "PCIe", 00:22:46.858 "traddr": "0000:00:11.0" 00:22:46.858 }, 00:22:46.858 "ctrlr_data": { 00:22:46.858 "cntlid": 0, 00:22:46.858 "vendor_id": "0x1b36", 00:22:46.858 "model_number": "QEMU NVMe Ctrl", 
00:22:46.858 "serial_number": "12341", 00:22:46.858 "firmware_revision": "8.0.0", 00:22:46.858 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:46.858 "oacs": { 00:22:46.858 "security": 0, 00:22:46.858 "format": 1, 00:22:46.858 "firmware": 0, 00:22:46.858 "ns_manage": 1 00:22:46.858 }, 00:22:46.858 "multi_ctrlr": false, 00:22:46.858 "ana_reporting": false 00:22:46.858 }, 00:22:46.858 "vs": { 00:22:46.858 "nvme_version": "1.4" 00:22:46.858 }, 00:22:46.858 "ns_data": { 00:22:46.858 "id": 1, 00:22:46.858 "can_share": false 00:22:46.858 } 00:22:46.858 } 00:22:46.858 ], 00:22:46.858 "mp_policy": "active_passive" 00:22:46.858 } 00:22:46.858 } 00:22:46.858 ]' 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:46.858 09:06:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=496dffea-bf54-4f50-9f93-4501063ee912 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.119 09:06:41 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 496dffea-bf54-4f50-9f93-4501063ee912 00:22:47.379 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:47.639 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a76b09f7-8cb4-43d3-aa85-8a8dacbdd764 00:22:47.639 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a76b09f7-8cb4-43d3-aa85-8a8dacbdd764 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:47.900 09:06:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:48.161 { 00:22:48.161 "name": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:48.161 "aliases": [ 00:22:48.161 "lvs/nvme0n1p0" 00:22:48.161 ], 00:22:48.161 "product_name": "Logical Volume", 00:22:48.161 "block_size": 4096, 00:22:48.161 "num_blocks": 26476544, 00:22:48.161 "uuid": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:48.161 "assigned_rate_limits": { 00:22:48.161 "rw_ios_per_sec": 0, 00:22:48.161 "rw_mbytes_per_sec": 0, 00:22:48.161 "r_mbytes_per_sec": 0, 00:22:48.161 "w_mbytes_per_sec": 0 00:22:48.161 }, 00:22:48.161 "claimed": false, 00:22:48.161 "zoned": false, 00:22:48.161 "supported_io_types": { 00:22:48.161 "read": true, 00:22:48.161 "write": true, 00:22:48.161 "unmap": true, 00:22:48.161 "flush": false, 00:22:48.161 "reset": true, 00:22:48.161 "nvme_admin": false, 00:22:48.161 "nvme_io": false, 00:22:48.161 "nvme_io_md": false, 00:22:48.161 "write_zeroes": true, 00:22:48.161 "zcopy": false, 00:22:48.161 "get_zone_info": false, 00:22:48.161 "zone_management": false, 00:22:48.161 "zone_append": false, 00:22:48.161 "compare": false, 00:22:48.161 "compare_and_write": false, 00:22:48.161 "abort": false, 00:22:48.161 "seek_hole": true, 00:22:48.161 "seek_data": true, 00:22:48.161 "copy": false, 00:22:48.161 "nvme_iov_md": false 00:22:48.161 }, 00:22:48.161 "driver_specific": { 00:22:48.161 "lvol": { 00:22:48.161 "lvol_store_uuid": "a76b09f7-8cb4-43d3-aa85-8a8dacbdd764", 00:22:48.161 "base_bdev": "nvme0n1", 00:22:48.161 "thin_provision": true, 00:22:48.161 "num_allocated_clusters": 0, 00:22:48.161 "snapshot": false, 00:22:48.161 "clone": false, 00:22:48.161 "esnap_clone": false 00:22:48.161 } 00:22:48.161 } 00:22:48.161 } 00:22:48.161 ]' 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:48.161 09:06:42 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:48.161 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:48.421 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:48.421 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:48.421 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.421 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.422 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:48.422 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:48.422 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:48.422 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:48.683 { 00:22:48.683 "name": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:48.683 "aliases": [ 00:22:48.683 "lvs/nvme0n1p0" 00:22:48.683 ], 00:22:48.683 "product_name": "Logical 
Volume", 00:22:48.683 "block_size": 4096, 00:22:48.683 "num_blocks": 26476544, 00:22:48.683 "uuid": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:48.683 "assigned_rate_limits": { 00:22:48.683 "rw_ios_per_sec": 0, 00:22:48.683 "rw_mbytes_per_sec": 0, 00:22:48.683 "r_mbytes_per_sec": 0, 00:22:48.683 "w_mbytes_per_sec": 0 00:22:48.683 }, 00:22:48.683 "claimed": false, 00:22:48.683 "zoned": false, 00:22:48.683 "supported_io_types": { 00:22:48.683 "read": true, 00:22:48.683 "write": true, 00:22:48.683 "unmap": true, 00:22:48.683 "flush": false, 00:22:48.683 "reset": true, 00:22:48.683 "nvme_admin": false, 00:22:48.683 "nvme_io": false, 00:22:48.683 "nvme_io_md": false, 00:22:48.683 "write_zeroes": true, 00:22:48.683 "zcopy": false, 00:22:48.683 "get_zone_info": false, 00:22:48.683 "zone_management": false, 00:22:48.683 "zone_append": false, 00:22:48.683 "compare": false, 00:22:48.683 "compare_and_write": false, 00:22:48.683 "abort": false, 00:22:48.683 "seek_hole": true, 00:22:48.683 "seek_data": true, 00:22:48.683 "copy": false, 00:22:48.683 "nvme_iov_md": false 00:22:48.683 }, 00:22:48.683 "driver_specific": { 00:22:48.683 "lvol": { 00:22:48.683 "lvol_store_uuid": "a76b09f7-8cb4-43d3-aa85-8a8dacbdd764", 00:22:48.683 "base_bdev": "nvme0n1", 00:22:48.683 "thin_provision": true, 00:22:48.683 "num_allocated_clusters": 0, 00:22:48.683 "snapshot": false, 00:22:48.683 "clone": false, 00:22:48.683 "esnap_clone": false 00:22:48.683 } 00:22:48.683 } 00:22:48.683 } 00:22:48.683 ]' 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:48.683 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 
00:22:48.684 09:06:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:48.684 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:48.684 09:06:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:48.945 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:48.945 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.945 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:48.946 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:48.946 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:48.946 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:48.946 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 00:22:49.205 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:49.205 { 00:22:49.205 "name": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:49.205 "aliases": [ 00:22:49.205 "lvs/nvme0n1p0" 00:22:49.205 ], 00:22:49.205 "product_name": "Logical Volume", 00:22:49.205 "block_size": 4096, 00:22:49.205 "num_blocks": 26476544, 00:22:49.205 "uuid": "6df87d7f-d2c2-4c1d-8f54-7a12e1421045", 00:22:49.205 "assigned_rate_limits": { 00:22:49.205 "rw_ios_per_sec": 0, 00:22:49.205 "rw_mbytes_per_sec": 0, 00:22:49.205 "r_mbytes_per_sec": 0, 00:22:49.205 "w_mbytes_per_sec": 0 00:22:49.205 }, 00:22:49.205 "claimed": false, 00:22:49.205 "zoned": false, 00:22:49.205 "supported_io_types": { 00:22:49.205 "read": true, 00:22:49.205 "write": true, 00:22:49.205 "unmap": true, 00:22:49.205 "flush": false, 
00:22:49.205 "reset": true, 00:22:49.205 "nvme_admin": false, 00:22:49.205 "nvme_io": false, 00:22:49.205 "nvme_io_md": false, 00:22:49.205 "write_zeroes": true, 00:22:49.205 "zcopy": false, 00:22:49.205 "get_zone_info": false, 00:22:49.205 "zone_management": false, 00:22:49.205 "zone_append": false, 00:22:49.205 "compare": false, 00:22:49.205 "compare_and_write": false, 00:22:49.205 "abort": false, 00:22:49.205 "seek_hole": true, 00:22:49.205 "seek_data": true, 00:22:49.205 "copy": false, 00:22:49.205 "nvme_iov_md": false 00:22:49.205 }, 00:22:49.205 "driver_specific": { 00:22:49.205 "lvol": { 00:22:49.205 "lvol_store_uuid": "a76b09f7-8cb4-43d3-aa85-8a8dacbdd764", 00:22:49.205 "base_bdev": "nvme0n1", 00:22:49.205 "thin_provision": true, 00:22:49.205 "num_allocated_clusters": 0, 00:22:49.205 "snapshot": false, 00:22:49.205 "clone": false, 00:22:49.205 "esnap_clone": false 00:22:49.205 } 00:22:49.205 } 00:22:49.205 } 00:22:49.205 ]' 00:22:49.205 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:49.205 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:49.205 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 --l2p_dram_limit 10' 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 
0000:00:10.0 ']' 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:49.465 09:06:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 --l2p_dram_limit 10 -c nvc0n1p0 00:22:49.465 [2024-11-28 09:06:43.564779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.564832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:49.465 [2024-11-28 09:06:43.564843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.465 [2024-11-28 09:06:43.564851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.564893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.564904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.465 [2024-11-28 09:06:43.564912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:49.465 [2024-11-28 09:06:43.564922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.564943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:49.465 [2024-11-28 09:06:43.565205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:49.465 [2024-11-28 09:06:43.565232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.565241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.465 [2024-11-28 09:06:43.565250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:49.465 [2024-11-28 09:06:43.565257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
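The sizes that recur in the trace above (`base_size=5120`, `bdev_size=103424`, `cache_size=5171`) all come from the same `get_bdev_size` arithmetic: `jq` extracts `block_size` and `num_blocks` from `bdev_get_bdevs`, and the size in MiB is their product divided by 1 MiB. A minimal Python sketch of that formula, using the values this log reports (the helper name here is illustrative, not SPDK API):

```python
def bdev_size_mib(num_blocks: int, block_size: int) -> int:
    # Mirrors get_bdev_size in common/autotest_common.sh:
    # size_in_mib = num_blocks * block_size / (1024 * 1024)
    return num_blocks * block_size // (1024 * 1024)

# nvme0n1 as reported by bdev_get_bdevs in this log (5 GiB QEMU NVMe)
assert bdev_size_mib(1310720, 4096) == 5120
# the thin-provisioned lvol: 26476544 blocks of 4096 bytes -> 103424 MiB
assert bdev_size_mib(26476544, 4096) == 103424
```

Note the lvol is created with `-t` (thin provisioning), which is why a 103424 MiB volume can sit on a 5120 MiB base device.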
00:22:49.465 [2024-11-28 09:06:43.565284] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d68cafb5-7169-49f9-9dc2-5db61c434e14 00:22:49.465 [2024-11-28 09:06:43.566551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.566577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:49.465 [2024-11-28 09:06:43.566586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:49.465 [2024-11-28 09:06:43.566593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.573445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.573476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.465 [2024-11-28 09:06:43.573485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.791 ms 00:22:49.465 [2024-11-28 09:06:43.573492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.573555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.573562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.465 [2024-11-28 09:06:43.573573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:49.465 [2024-11-28 09:06:43.573580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.573637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.573646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:49.465 [2024-11-28 09:06:43.573654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:49.465 [2024-11-28 09:06:43.573660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 
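Reconstructed from the trace above, the setup driven by `dirty_shutdown.sh` amounts to the following `rpc.py` sequence. This is a sketch against a live SPDK target, not runnable standalone; the PCI addresses, UUIDs, and sizes are the ones from this particular run:

```shell
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# base device: QEMU NVMe controller at 0000:00:11.0 -> nvme0n1
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
# clear_lvols: delete any lvstore left over from a previous test
$RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
    | xargs -r -n1 $RPC bdev_lvol_delete_lvstore -u
# fresh lvstore, then a thin-provisioned (-t) 103424 MiB volume on it
$RPC bdev_lvol_create_lvstore nvme0n1 lvs
$RPC bdev_lvol_create nvme0n1p0 103424 -t -u a76b09f7-8cb4-43d3-aa85-8a8dacbdd764
# NV cache device at 0000:00:10.0, split to one 5171 MiB partition
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1
# assemble the FTL bdev with a 10 MiB L2P DRAM limit
$RPC -t 240 bdev_ftl_create -b ftl0 -d 6df87d7f-d2c2-4c1d-8f54-7a12e1421045 \
    --l2p_dram_limit 10 -c nvc0n1p0
```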
[2024-11-28 09:06:43.573679] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:49.465 [2024-11-28 09:06:43.575332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.575361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.465 [2024-11-28 09:06:43.575370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:22:49.465 [2024-11-28 09:06:43.575380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.575407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.575415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:49.465 [2024-11-28 09:06:43.575424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:49.465 [2024-11-28 09:06:43.575434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.575454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:49.465 [2024-11-28 09:06:43.575573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:49.465 [2024-11-28 09:06:43.575583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:49.465 [2024-11-28 09:06:43.575595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:49.465 [2024-11-28 09:06:43.575604] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:49.465 [2024-11-28 09:06:43.575613] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:49.465 [2024-11-28 09:06:43.575619] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] L2P entries: 20971520 00:22:49.465 [2024-11-28 09:06:43.575630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:49.465 [2024-11-28 09:06:43.575636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:49.465 [2024-11-28 09:06:43.575643] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:49.465 [2024-11-28 09:06:43.575650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.575658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:49.465 [2024-11-28 09:06:43.575664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:22:49.465 [2024-11-28 09:06:43.575672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.575737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.465 [2024-11-28 09:06:43.575755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:49.465 [2024-11-28 09:06:43.575762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:49.465 [2024-11-28 09:06:43.575769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.465 [2024-11-28 09:06:43.575851] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:49.465 [2024-11-28 09:06:43.575864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:49.465 [2024-11-28 09:06:43.575870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.465 [2024-11-28 09:06:43.575878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.465 [2024-11-28 09:06:43.575888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:49.465 [2024-11-28 09:06:43.575895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:49.465 [2024-11-28 
09:06:43.575900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:49.465 [2024-11-28 09:06:43.575907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:49.465 [2024-11-28 09:06:43.575914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:49.465 [2024-11-28 09:06:43.575921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.465 [2024-11-28 09:06:43.575926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:49.465 [2024-11-28 09:06:43.575932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:49.465 [2024-11-28 09:06:43.575937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.465 [2024-11-28 09:06:43.575949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:49.465 [2024-11-28 09:06:43.575955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:49.465 [2024-11-28 09:06:43.575962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.465 [2024-11-28 09:06:43.575967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:49.465 [2024-11-28 09:06:43.575974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:49.465 [2024-11-28 09:06:43.575979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.465 [2024-11-28 09:06:43.575985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:49.465 [2024-11-28 09:06:43.575991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:49.466 [2024-11-28 09:06:43.575998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:49.466 [2024-11-28 09:06:43.576009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 
00:22:49.466 [2024-11-28 09:06:43.576014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:49.466 [2024-11-28 09:06:43.576026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:49.466 [2024-11-28 09:06:43.576047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:49.466 [2024-11-28 09:06:43.576065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.466 [2024-11-28 09:06:43.576077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:49.466 [2024-11-28 09:06:43.576083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:49.466 [2024-11-28 09:06:43.576089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.466 [2024-11-28 09:06:43.576096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:49.466 [2024-11-28 09:06:43.576101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:49.466 [2024-11-28 09:06:43.576107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:49.466 [2024-11-28 09:06:43.576118] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:22:49.466 [2024-11-28 09:06:43.576123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576130] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:49.466 [2024-11-28 09:06:43.576136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:49.466 [2024-11-28 09:06:43.576150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.466 [2024-11-28 09:06:43.576164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:49.466 [2024-11-28 09:06:43.576169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:49.466 [2024-11-28 09:06:43.576175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:49.466 [2024-11-28 09:06:43.576181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:49.466 [2024-11-28 09:06:43.576187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:49.466 [2024-11-28 09:06:43.576193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:49.466 [2024-11-28 09:06:43.576202] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:49.466 [2024-11-28 09:06:43.576209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:49.466 [2024-11-28 09:06:43.576225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:49.466 [2024-11-28 09:06:43.576232] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:49.466 [2024-11-28 09:06:43.576238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:49.466 [2024-11-28 09:06:43.576245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:49.466 [2024-11-28 09:06:43.576250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:49.466 [2024-11-28 09:06:43.576259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:49.466 [2024-11-28 09:06:43.576264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:49.466 [2024-11-28 09:06:43.576271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:49.466 [2024-11-28 09:06:43.576277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:49.466 [2024-11-28 09:06:43.576310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:49.466 [2024-11-28 09:06:43.576318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:49.466 [2024-11-28 09:06:43.576333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:49.466 [2024-11-28 09:06:43.576339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:49.466 [2024-11-28 09:06:43.576345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:49.466 [2024-11-28 09:06:43.576352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.466 [2024-11-28 09:06:43.576358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:49.466 [2024-11-28 09:06:43.576376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:22:49.466 [2024-11-28 09:06:43.576382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.466 [2024-11-28 09:06:43.576415] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
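The layout dump above is internally consistent: the 80.00 MiB `l2p` region holds one 4-byte entry per logical 4 KiB block, which is exactly the "L2P entries: 20971520" the log reports, and each 8.00 MiB `p2l` region matches "P2L checkpoint pages: 2048". A quick Python check of those relationships, with all values taken from this log:

```python
MiB = 1024 * 1024

# l2p region: 80.00 MiB of 4-byte entries ("L2P address size: 4")
l2p_region_mib = 80
l2p_entry_size = 4
assert l2p_region_mib * MiB // l2p_entry_size == 20971520  # "L2P entries: 20971520"

# each p2l region (p2l0..p2l3) is 8.00 MiB of 4 KiB checkpoint pages
assert 8 * MiB // 4096 == 2048  # "P2L checkpoint pages: 2048"
```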
00:22:49.466 [2024-11-28 09:06:43.576428] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:52.754 [2024-11-28 09:06:46.666410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.666533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:52.754 [2024-11-28 09:06:46.666561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3089.970 ms 00:22:52.754 [2024-11-28 09:06:46.666571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.686410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.686489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.754 [2024-11-28 09:06:46.686509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.690 ms 00:22:52.754 [2024-11-28 09:06:46.686519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.686675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.686688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:52.754 [2024-11-28 09:06:46.686705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:52.754 [2024-11-28 09:06:46.686714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.703334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.703398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.754 [2024-11-28 09:06:46.703417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.552 ms 00:22:52.754 [2024-11-28 09:06:46.703428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.703472] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.703488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.754 [2024-11-28 09:06:46.703501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:52.754 [2024-11-28 09:06:46.703511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.704277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.704323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.754 [2024-11-28 09:06:46.704337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:22:52.754 [2024-11-28 09:06:46.704347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.704479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.704489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.754 [2024-11-28 09:06:46.704506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:22:52.754 [2024-11-28 09:06:46.704516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.729694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.729796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.754 [2024-11-28 09:06:46.729880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.138 ms 00:22:52.754 [2024-11-28 09:06:46.729904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.741432] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:52.754 [2024-11-28 09:06:46.746551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 
[2024-11-28 09:06:46.746605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:52.754 [2024-11-28 09:06:46.746617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.416 ms 00:22:52.754 [2024-11-28 09:06:46.746628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.843947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.844028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:52.754 [2024-11-28 09:06:46.844043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.285 ms 00:22:52.754 [2024-11-28 09:06:46.844060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.754 [2024-11-28 09:06:46.844400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.754 [2024-11-28 09:06:46.844433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:52.754 [2024-11-28 09:06:46.844444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:22:52.755 [2024-11-28 09:06:46.844457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.755 [2024-11-28 09:06:46.850600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.755 [2024-11-28 09:06:46.850658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:52.755 [2024-11-28 09:06:46.850671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:22:52.755 [2024-11-28 09:06:46.850684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.755 [2024-11-28 09:06:46.855926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.755 [2024-11-28 09:06:46.855974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:52.755 [2024-11-28 09:06:46.855986] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:22:52.755 [2024-11-28 09:06:46.855996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.755 [2024-11-28 09:06:46.856366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.755 [2024-11-28 09:06:46.856399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:52.755 [2024-11-28 09:06:46.856411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:22:52.755 [2024-11-28 09:06:46.856426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.015 [2024-11-28 09:06:46.906604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.015 [2024-11-28 09:06:46.906668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:53.015 [2024-11-28 09:06:46.906688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.156 ms 00:22:53.015 [2024-11-28 09:06:46.906700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.015 [2024-11-28 09:06:46.914846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.015 [2024-11-28 09:06:46.914905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:53.015 [2024-11-28 09:06:46.914917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.078 ms 00:22:53.015 [2024-11-28 09:06:46.914930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.015 [2024-11-28 09:06:46.920768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.015 [2024-11-28 09:06:46.920839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:53.015 [2024-11-28 09:06:46.920850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.790 ms 00:22:53.015 [2024-11-28 09:06:46.920861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.015 [2024-11-28 
09:06:46.927256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.015 [2024-11-28 09:06:46.927316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.015 [2024-11-28 09:06:46.927327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.345 ms 00:22:53.015 [2024-11-28 09:06:46.927341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.015 [2024-11-28 09:06:46.927396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.016 [2024-11-28 09:06:46.927409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.016 [2024-11-28 09:06:46.927420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.016 [2024-11-28 09:06:46.927432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.016 [2024-11-28 09:06:46.927534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.016 [2024-11-28 09:06:46.927551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.016 [2024-11-28 09:06:46.927560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:53.016 [2024-11-28 09:06:46.927582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.016 [2024-11-28 09:06:46.929078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3363.678 ms, result 0 00:22:53.016 { 00:22:53.016 "name": "ftl0", 00:22:53.016 "uuid": "d68cafb5-7169-49f9-9dc2-5db61c434e14" 00:22:53.016 } 00:22:53.016 09:06:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:53.016 09:06:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:53.277 09:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:53.277 09:06:47 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:53.277 09:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:53.277 /dev/nbd0 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:53.537 1+0 records in 00:22:53.537 1+0 records out 00:22:53.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550023 s, 7.4 MB/s 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:53.537 09:06:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 
00:22:53.538 09:06:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:53.538 [2024-11-28 09:06:47.487721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:53.538 [2024-11-28 09:06:47.487889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89784 ] 00:22:53.538 [2024-11-28 09:06:47.639354] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.799 [2024-11-28 09:06:47.711843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.736  [2024-11-28T09:06:50.233Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-28T09:06:51.168Z] Copying: 381/1024 [MB] (193 MBps) [2024-11-28T09:06:52.105Z] Copying: 629/1024 [MB] (248 MBps) [2024-11-28T09:06:52.672Z] Copying: 877/1024 [MB] (247 MBps) [2024-11-28T09:06:52.672Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:22:58.552 00:22:58.552 09:06:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:01.082 09:06:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:01.082 [2024-11-28 09:06:54.843982] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:01.082 [2024-11-28 09:06:54.844094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89865 ] 00:23:01.082 [2024-11-28 09:06:54.993073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.082 [2024-11-28 09:06:55.035581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.018  [2024-11-28T09:06:57.114Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-28T09:06:58.502Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-28T09:06:59.447Z] Copying: 51/1024 [MB] (19 MBps) [2024-11-28T09:07:00.388Z] Copying: 70/1024 [MB] (19 MBps) [2024-11-28T09:07:01.331Z] Copying: 91/1024 [MB] (20 MBps) [2024-11-28T09:07:02.272Z] Copying: 110/1024 [MB] (19 MBps) [2024-11-28T09:07:03.214Z] Copying: 135/1024 [MB] (24 MBps) [2024-11-28T09:07:04.206Z] Copying: 152/1024 [MB] (17 MBps) [2024-11-28T09:07:05.151Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-28T09:07:06.532Z] Copying: 190/1024 [MB] (18 MBps) [2024-11-28T09:07:07.101Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-28T09:07:08.482Z] Copying: 228/1024 [MB] (21 MBps) [2024-11-28T09:07:09.422Z] Copying: 254/1024 [MB] (25 MBps) [2024-11-28T09:07:10.365Z] Copying: 274/1024 [MB] (20 MBps) [2024-11-28T09:07:11.307Z] Copying: 292/1024 [MB] (18 MBps) [2024-11-28T09:07:12.250Z] Copying: 311/1024 [MB] (18 MBps) [2024-11-28T09:07:13.191Z] Copying: 330/1024 [MB] (19 MBps) [2024-11-28T09:07:14.126Z] Copying: 349/1024 [MB] (18 MBps) [2024-11-28T09:07:15.511Z] Copying: 370/1024 [MB] (21 MBps) [2024-11-28T09:07:16.454Z] Copying: 387/1024 [MB] (17 MBps) [2024-11-28T09:07:17.395Z] Copying: 403/1024 [MB] (16 MBps) [2024-11-28T09:07:18.335Z] Copying: 424/1024 [MB] (20 MBps) [2024-11-28T09:07:19.278Z] Copying: 446/1024 [MB] (21 MBps) [2024-11-28T09:07:20.218Z] Copying: 467/1024 [MB] (21 MBps) [2024-11-28T09:07:21.162Z] 
Copying: 488/1024 [MB] (20 MBps) [2024-11-28T09:07:22.105Z] Copying: 508/1024 [MB] (20 MBps) [2024-11-28T09:07:23.486Z] Copying: 525/1024 [MB] (17 MBps) [2024-11-28T09:07:24.425Z] Copying: 546/1024 [MB] (20 MBps) [2024-11-28T09:07:25.364Z] Copying: 568/1024 [MB] (21 MBps) [2024-11-28T09:07:26.304Z] Copying: 593/1024 [MB] (25 MBps) [2024-11-28T09:07:27.244Z] Copying: 619/1024 [MB] (25 MBps) [2024-11-28T09:07:28.187Z] Copying: 641/1024 [MB] (22 MBps) [2024-11-28T09:07:29.126Z] Copying: 662/1024 [MB] (20 MBps) [2024-11-28T09:07:30.506Z] Copying: 690/1024 [MB] (28 MBps) [2024-11-28T09:07:31.115Z] Copying: 712/1024 [MB] (22 MBps) [2024-11-28T09:07:32.526Z] Copying: 737/1024 [MB] (24 MBps) [2024-11-28T09:07:33.468Z] Copying: 759/1024 [MB] (22 MBps) [2024-11-28T09:07:34.409Z] Copying: 785/1024 [MB] (25 MBps) [2024-11-28T09:07:35.350Z] Copying: 809/1024 [MB] (24 MBps) [2024-11-28T09:07:36.308Z] Copying: 828/1024 [MB] (19 MBps) [2024-11-28T09:07:37.250Z] Copying: 850/1024 [MB] (21 MBps) [2024-11-28T09:07:38.192Z] Copying: 871/1024 [MB] (21 MBps) [2024-11-28T09:07:39.132Z] Copying: 892/1024 [MB] (21 MBps) [2024-11-28T09:07:40.517Z] Copying: 914/1024 [MB] (22 MBps) [2024-11-28T09:07:41.458Z] Copying: 935/1024 [MB] (20 MBps) [2024-11-28T09:07:42.396Z] Copying: 954/1024 [MB] (19 MBps) [2024-11-28T09:07:43.335Z] Copying: 974/1024 [MB] (20 MBps) [2024-11-28T09:07:44.278Z] Copying: 995/1024 [MB] (20 MBps) [2024-11-28T09:07:44.847Z] Copying: 1012/1024 [MB] (17 MBps) [2024-11-28T09:07:45.105Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:23:50.985 00:23:50.985 09:07:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:50.986 09:07:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:51.244 09:07:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:51.244 [2024-11-28 09:07:45.345489] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.244 [2024-11-28 09:07:45.345558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.244 [2024-11-28 09:07:45.345580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:51.244 [2024-11-28 09:07:45.345589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.244 [2024-11-28 09:07:45.345633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.244 [2024-11-28 09:07:45.346608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.244 [2024-11-28 09:07:45.346662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.244 [2024-11-28 09:07:45.346676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:23:51.244 [2024-11-28 09:07:45.346694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.244 [2024-11-28 09:07:45.349457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.244 [2024-11-28 09:07:45.349515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.244 [2024-11-28 09:07:45.349528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:23:51.244 [2024-11-28 09:07:45.349539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.369614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.369669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.504 [2024-11-28 09:07:45.369683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.052 ms 00:23:51.504 [2024-11-28 09:07:45.369694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.375967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:51.504 [2024-11-28 09:07:45.376014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:51.504 [2024-11-28 09:07:45.376026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:23:51.504 [2024-11-28 09:07:45.376037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.378442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.378503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.504 [2024-11-28 09:07:45.378514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.320 ms 00:23:51.504 [2024-11-28 09:07:45.378526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.384454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.384515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.504 [2024-11-28 09:07:45.384530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.879 ms 00:23:51.504 [2024-11-28 09:07:45.384541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.384699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.384730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.504 [2024-11-28 09:07:45.384742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:23:51.504 [2024-11-28 09:07:45.384756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.387453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.387509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:51.504 [2024-11-28 09:07:45.387519] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:23:51.504 [2024-11-28 09:07:45.387530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.389722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.389780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:51.504 [2024-11-28 09:07:45.389790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.145 ms 00:23:51.504 [2024-11-28 09:07:45.389817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.391514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.391568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.504 [2024-11-28 09:07:45.391579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:23:51.504 [2024-11-28 09:07:45.391589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.393178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.504 [2024-11-28 09:07:45.393232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.504 [2024-11-28 09:07:45.393243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:23:51.504 [2024-11-28 09:07:45.393255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.504 [2024-11-28 09:07:45.393301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.504 [2024-11-28 09:07:45.393322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393346] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393495] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393643] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393787] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:51.504 [2024-11-28 09:07:45.393870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 
09:07:45.393948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.393993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 
[2024-11-28 09:07:45.394082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:23:51.505 [2024-11-28 09:07:45.394219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:51.505 [2024-11-28 09:07:45.394340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: 
free 00:23:51.505 [2024-11-28 09:07:45.394363] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.505 [2024-11-28 09:07:45.394373] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d68cafb5-7169-49f9-9dc2-5db61c434e14 00:23:51.505 [2024-11-28 09:07:45.394386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:51.505 [2024-11-28 09:07:45.394395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:51.505 [2024-11-28 09:07:45.394404] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:51.505 [2024-11-28 09:07:45.394413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:51.505 [2024-11-28 09:07:45.394424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.505 [2024-11-28 09:07:45.394432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.505 [2024-11-28 09:07:45.394442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.505 [2024-11-28 09:07:45.394450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.505 [2024-11-28 09:07:45.394460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.505 [2024-11-28 09:07:45.394468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.505 [2024-11-28 09:07:45.394485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.505 [2024-11-28 09:07:45.394495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:23:51.505 [2024-11-28 09:07:45.394505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.397313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.505 [2024-11-28 09:07:45.397363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.505 [2024-11-28 09:07:45.397374] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms 00:23:51.505 [2024-11-28 09:07:45.397385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.397521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.505 [2024-11-28 09:07:45.397543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.505 [2024-11-28 09:07:45.397553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:23:51.505 [2024-11-28 09:07:45.397564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.407352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.407404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.505 [2024-11-28 09:07:45.407415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.407428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.407495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.407507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.505 [2024-11-28 09:07:45.407516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.407526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.407617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.407638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.505 [2024-11-28 09:07:45.407647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.407658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 
[2024-11-28 09:07:45.407675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.407686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.505 [2024-11-28 09:07:45.407695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.407706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.421423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.421465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.505 [2024-11-28 09:07:45.421474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.421482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.431717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.431757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.505 [2024-11-28 09:07:45.431765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.431775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.431856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.431870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.505 [2024-11-28 09:07:45.431881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.505 [2024-11-28 09:07:45.431889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.505 [2024-11-28 09:07:45.431929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.505 [2024-11-28 09:07:45.431939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:23:51.506 [2024-11-28 09:07:45.431945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.506 [2024-11-28 09:07:45.431953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.506 [2024-11-28 09:07:45.432020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.506 [2024-11-28 09:07:45.432033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.506 [2024-11-28 09:07:45.432040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.506 [2024-11-28 09:07:45.432048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.506 [2024-11-28 09:07:45.432076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.506 [2024-11-28 09:07:45.432087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.506 [2024-11-28 09:07:45.432093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.506 [2024-11-28 09:07:45.432101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.506 [2024-11-28 09:07:45.432136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.506 [2024-11-28 09:07:45.432151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.506 [2024-11-28 09:07:45.432157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.506 [2024-11-28 09:07:45.432166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.506 [2024-11-28 09:07:45.432208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.506 [2024-11-28 09:07:45.432217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.506 [2024-11-28 09:07:45.432224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.506 [2024-11-28 09:07:45.432232] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.506 [2024-11-28 09:07:45.432356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.848 ms, result 0 00:23:51.506 true 00:23:51.506 09:07:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89642 00:23:51.506 09:07:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89642 00:23:51.506 09:07:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:51.506 [2024-11-28 09:07:45.523856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:51.506 [2024-11-28 09:07:45.523977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90395 ] 00:23:51.767 [2024-11-28 09:07:45.672305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.767 [2024-11-28 09:07:45.717355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.706  [2024-11-28T09:07:48.203Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-28T09:07:49.137Z] Copying: 426/1024 [MB] (235 MBps) [2024-11-28T09:07:50.072Z] Copying: 683/1024 [MB] (257 MBps) [2024-11-28T09:07:50.330Z] Copying: 936/1024 [MB] (252 MBps) [2024-11-28T09:07:50.330Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:23:56.210 00:23:56.210 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89642 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:56.210 09:07:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 
--count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:56.468 [2024-11-28 09:07:50.386350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:56.468 [2024-11-28 09:07:50.386468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90443 ] 00:23:56.468 [2024-11-28 09:07:50.533358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.468 [2024-11-28 09:07:50.580170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.726 [2024-11-28 09:07:50.678617] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.726 [2024-11-28 09:07:50.678681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.726 [2024-11-28 09:07:50.740852] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:56.726 [2024-11-28 09:07:50.741086] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:56.726 [2024-11-28 09:07:50.741292] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:57.295 [2024-11-28 09:07:51.180436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.295 [2024-11-28 09:07:51.180474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:57.295 [2024-11-28 09:07:51.180490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:57.295 [2024-11-28 09:07:51.180498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.295 [2024-11-28 09:07:51.180546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.295 [2024-11-28 09:07:51.180559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:23:57.295 [2024-11-28 09:07:51.180571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:57.295 [2024-11-28 09:07:51.180579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.295 [2024-11-28 09:07:51.180600] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:57.295 [2024-11-28 09:07:51.180865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:57.295 [2024-11-28 09:07:51.180886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.295 [2024-11-28 09:07:51.180898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.295 [2024-11-28 09:07:51.180906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:57.295 [2024-11-28 09:07:51.180914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.295 [2024-11-28 09:07:51.182317] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:57.295 [2024-11-28 09:07:51.185414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.295 [2024-11-28 09:07:51.185449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:57.295 [2024-11-28 09:07:51.185464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:23:57.295 [2024-11-28 09:07:51.185471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.295 [2024-11-28 09:07:51.185529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.295 [2024-11-28 09:07:51.185539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:57.295 [2024-11-28 09:07:51.185551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:57.295 [2024-11-28 09:07:51.185559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:57.295 [2024-11-28 09:07:51.192538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.192565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.296 [2024-11-28 09:07:51.192578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.932 ms 00:23:57.296 [2024-11-28 09:07:51.192585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.192678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.192687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.296 [2024-11-28 09:07:51.192695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:57.296 [2024-11-28 09:07:51.192703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.192751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.192764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:57.296 [2024-11-28 09:07:51.192774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:57.296 [2024-11-28 09:07:51.192782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.192820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:57.296 [2024-11-28 09:07:51.194603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.194627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.296 [2024-11-28 09:07:51.194636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:23:57.296 [2024-11-28 09:07:51.194644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.194680] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.194692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:57.296 [2024-11-28 09:07:51.194701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:57.296 [2024-11-28 09:07:51.194708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.194735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:57.296 [2024-11-28 09:07:51.194755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:57.296 [2024-11-28 09:07:51.194791] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:57.296 [2024-11-28 09:07:51.194832] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:57.296 [2024-11-28 09:07:51.194939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:57.296 [2024-11-28 09:07:51.194951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:57.296 [2024-11-28 09:07:51.194964] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:57.296 [2024-11-28 09:07:51.194975] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:57.296 [2024-11-28 09:07:51.194985] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:57.296 [2024-11-28 09:07:51.194993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:57.296 [2024-11-28 09:07:51.195004] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:57.296 
[2024-11-28 09:07:51.195012] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:57.296 [2024-11-28 09:07:51.195020] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:57.296 [2024-11-28 09:07:51.195028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.195037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:57.296 [2024-11-28 09:07:51.195045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:23:57.296 [2024-11-28 09:07:51.195053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.195135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.296 [2024-11-28 09:07:51.195144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:57.296 [2024-11-28 09:07:51.195153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:57.296 [2024-11-28 09:07:51.195166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.296 [2024-11-28 09:07:51.195268] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:57.296 [2024-11-28 09:07:51.195280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:57.296 [2024-11-28 09:07:51.195292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:57.296 [2024-11-28 09:07:51.195318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region band_md 00:23:57.296 [2024-11-28 09:07:51.195344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.296 [2024-11-28 09:07:51.195360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:57.296 [2024-11-28 09:07:51.195368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:57.296 [2024-11-28 09:07:51.195376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.296 [2024-11-28 09:07:51.195384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:57.296 [2024-11-28 09:07:51.195392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:57.296 [2024-11-28 09:07:51.195400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:57.296 [2024-11-28 09:07:51.195422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:57.296 [2024-11-28 09:07:51.195448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:57.296 [2024-11-28 09:07:51.195473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195488] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region p2l2 00:23:57.296 [2024-11-28 09:07:51.195496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:57.296 [2024-11-28 09:07:51.195519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:57.296 [2024-11-28 09:07:51.195546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.296 [2024-11-28 09:07:51.195561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:57.296 [2024-11-28 09:07:51.195569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:57.296 [2024-11-28 09:07:51.195576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.296 [2024-11-28 09:07:51.195584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:57.296 [2024-11-28 09:07:51.195591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:57.296 [2024-11-28 09:07:51.195599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:57.296 [2024-11-28 09:07:51.195615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:57.296 [2024-11-28 09:07:51.195623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195630] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:57.296 [2024-11-28 09:07:51.195639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:57.296 [2024-11-28 09:07:51.195654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.296 [2024-11-28 09:07:51.195672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:57.296 [2024-11-28 09:07:51.195680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:57.296 [2024-11-28 09:07:51.195686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:57.296 [2024-11-28 09:07:51.195693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:57.296 [2024-11-28 09:07:51.195707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:57.296 [2024-11-28 09:07:51.195714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:57.296 [2024-11-28 09:07:51.195723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:57.296 [2024-11-28 09:07:51.195732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.296 [2024-11-28 09:07:51.195741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:57.296 [2024-11-28 09:07:51.195748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:57.296 [2024-11-28 09:07:51.195755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:57.296 [2024-11-28 09:07:51.195762] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:57.296 [2024-11-28 09:07:51.195770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:57.296 [2024-11-28 09:07:51.195777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:57.297 [2024-11-28 09:07:51.195784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:57.297 [2024-11-28 09:07:51.195791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:57.297 [2024-11-28 09:07:51.195811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:57.297 [2024-11-28 09:07:51.195821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:57.297 [2024-11-28 09:07:51.195857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout 
- base dev: 00:23:57.297 [2024-11-28 09:07:51.195866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:57.297 [2024-11-28 09:07:51.195881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:57.297 [2024-11-28 09:07:51.195888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:57.297 [2024-11-28 09:07:51.195896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:57.297 [2024-11-28 09:07:51.195903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.195913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:57.297 [2024-11-28 09:07:51.195920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:23:57.297 [2024-11-28 09:07:51.195927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.220692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.220746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.297 [2024-11-28 09:07:51.220776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.716 ms 00:23:57.297 [2024-11-28 09:07:51.220790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.220943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.220958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize band addresses 00:23:57.297 [2024-11-28 09:07:51.220981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:57.297 [2024-11-28 09:07:51.220992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.232484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.232518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.297 [2024-11-28 09:07:51.232530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.404 ms 00:23:57.297 [2024-11-28 09:07:51.232538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.232571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.232584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.297 [2024-11-28 09:07:51.232593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:57.297 [2024-11-28 09:07:51.232601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.233111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.233130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.297 [2024-11-28 09:07:51.233141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:23:57.297 [2024-11-28 09:07:51.233149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.233297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.233306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.297 [2024-11-28 09:07:51.233319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:57.297 [2024-11-28 09:07:51.233328] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.239817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.239841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.297 [2024-11-28 09:07:51.239860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.467 ms 00:23:57.297 [2024-11-28 09:07:51.239869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.243217] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:57.297 [2024-11-28 09:07:51.243251] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:57.297 [2024-11-28 09:07:51.243262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.243270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:57.297 [2024-11-28 09:07:51.243279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.307 ms 00:23:57.297 [2024-11-28 09:07:51.243288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.258526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.258557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:57.297 [2024-11-28 09:07:51.258569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.196 ms 00:23:57.297 [2024-11-28 09:07:51.258577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.260992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.261015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:57.297 [2024-11-28 
09:07:51.261024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:23:57.297 [2024-11-28 09:07:51.261031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.262917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.262936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:57.297 [2024-11-28 09:07:51.262944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:23:57.297 [2024-11-28 09:07:51.262952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.263313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.263328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:57.297 [2024-11-28 09:07:51.263341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:23:57.297 [2024-11-28 09:07:51.263349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.285208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.285256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:57.297 [2024-11-28 09:07:51.285271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.841 ms 00:23:57.297 [2024-11-28 09:07:51.285280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.293520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:57.297 [2024-11-28 09:07:51.296686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.296715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:57.297 [2024-11-28 09:07:51.296734] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.360 ms 00:23:57.297 [2024-11-28 09:07:51.296744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.296837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.296849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:57.297 [2024-11-28 09:07:51.296859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:57.297 [2024-11-28 09:07:51.296867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.296945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.296957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:57.297 [2024-11-28 09:07:51.296966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:57.297 [2024-11-28 09:07:51.296974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.296995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.297004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:57.297 [2024-11-28 09:07:51.297012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:57.297 [2024-11-28 09:07:51.297020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.297064] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:57.297 [2024-11-28 09:07:51.297078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.297085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:57.297 [2024-11-28 09:07:51.297094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:57.297 [2024-11-28 
09:07:51.297102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.301625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.301657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:57.297 [2024-11-28 09:07:51.301668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.505 ms 00:23:57.297 [2024-11-28 09:07:51.301677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.301756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.297 [2024-11-28 09:07:51.301770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:57.297 [2024-11-28 09:07:51.301780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:57.297 [2024-11-28 09:07:51.301788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.297 [2024-11-28 09:07:51.302997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.063 ms, result 0 00:23:58.240  [2024-11-28T09:07:53.748Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-28T09:07:54.319Z] Copying: 34/1024 [MB] (15 MBps) [2024-11-28T09:07:55.713Z] Copying: 45/1024 [MB] (10 MBps) [2024-11-28T09:07:56.651Z] Copying: 56/1024 [MB] (11 MBps) [2024-11-28T09:07:57.595Z] Copying: 74/1024 [MB] (17 MBps) [2024-11-28T09:07:58.538Z] Copying: 88/1024 [MB] (14 MBps) [2024-11-28T09:07:59.471Z] Copying: 101/1024 [MB] (12 MBps) [2024-11-28T09:08:00.467Z] Copying: 124/1024 [MB] (23 MBps) [2024-11-28T09:08:01.434Z] Copying: 143/1024 [MB] (19 MBps) [2024-11-28T09:08:02.375Z] Copying: 155/1024 [MB] (12 MBps) [2024-11-28T09:08:03.319Z] Copying: 173/1024 [MB] (17 MBps) [2024-11-28T09:08:04.706Z] Copying: 192/1024 [MB] (19 MBps) [2024-11-28T09:08:05.672Z] Copying: 205/1024 [MB] (12 MBps) [2024-11-28T09:08:06.615Z] Copying: 222/1024 
[MB] (17 MBps) [2024-11-28T09:08:07.559Z] Copying: 237/1024 [MB] (15 MBps) [2024-11-28T09:08:08.502Z] Copying: 250/1024 [MB] (12 MBps) [2024-11-28T09:08:09.447Z] Copying: 260/1024 [MB] (10 MBps) [2024-11-28T09:08:10.390Z] Copying: 272/1024 [MB] (11 MBps) [2024-11-28T09:08:11.335Z] Copying: 285/1024 [MB] (13 MBps) [2024-11-28T09:08:12.723Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-28T09:08:13.667Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-28T09:08:14.608Z] Copying: 320/1024 [MB] (13 MBps) [2024-11-28T09:08:15.541Z] Copying: 335/1024 [MB] (15 MBps) [2024-11-28T09:08:16.478Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-28T09:08:17.422Z] Copying: 377/1024 [MB] (18 MBps) [2024-11-28T09:08:18.367Z] Copying: 389/1024 [MB] (12 MBps) [2024-11-28T09:08:19.755Z] Copying: 402/1024 [MB] (12 MBps) [2024-11-28T09:08:20.322Z] Copying: 413/1024 [MB] (11 MBps) [2024-11-28T09:08:21.697Z] Copying: 445/1024 [MB] (32 MBps) [2024-11-28T09:08:22.631Z] Copying: 471/1024 [MB] (26 MBps) [2024-11-28T09:08:23.568Z] Copying: 495/1024 [MB] (23 MBps) [2024-11-28T09:08:24.508Z] Copying: 525/1024 [MB] (29 MBps) [2024-11-28T09:08:25.451Z] Copying: 543/1024 [MB] (18 MBps) [2024-11-28T09:08:26.392Z] Copying: 557/1024 [MB] (13 MBps) [2024-11-28T09:08:27.337Z] Copying: 573/1024 [MB] (15 MBps) [2024-11-28T09:08:28.717Z] Copying: 584/1024 [MB] (11 MBps) [2024-11-28T09:08:29.674Z] Copying: 597/1024 [MB] (12 MBps) [2024-11-28T09:08:30.619Z] Copying: 626/1024 [MB] (29 MBps) [2024-11-28T09:08:31.555Z] Copying: 641/1024 [MB] (14 MBps) [2024-11-28T09:08:32.492Z] Copying: 661/1024 [MB] (19 MBps) [2024-11-28T09:08:33.434Z] Copying: 681/1024 [MB] (19 MBps) [2024-11-28T09:08:34.378Z] Copying: 696/1024 [MB] (15 MBps) [2024-11-28T09:08:35.333Z] Copying: 708/1024 [MB] (12 MBps) [2024-11-28T09:08:36.725Z] Copying: 723/1024 [MB] (14 MBps) [2024-11-28T09:08:37.670Z] Copying: 739/1024 [MB] (16 MBps) [2024-11-28T09:08:38.613Z] Copying: 756/1024 [MB] (17 MBps) [2024-11-28T09:08:39.558Z] Copying: 769/1024 [MB] (13 MBps) 
[2024-11-28T09:08:40.503Z] Copying: 785/1024 [MB] (16 MBps) [2024-11-28T09:08:41.449Z] Copying: 802/1024 [MB] (16 MBps) [2024-11-28T09:08:42.394Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-28T09:08:43.338Z] Copying: 829/1024 [MB] (15 MBps) [2024-11-28T09:08:44.724Z] Copying: 843/1024 [MB] (13 MBps) [2024-11-28T09:08:45.660Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-28T09:08:46.601Z] Copying: 884/1024 [MB] (30 MBps) [2024-11-28T09:08:47.546Z] Copying: 914/1024 [MB] (29 MBps) [2024-11-28T09:08:48.484Z] Copying: 928/1024 [MB] (14 MBps) [2024-11-28T09:08:49.417Z] Copying: 946/1024 [MB] (17 MBps) [2024-11-28T09:08:50.359Z] Copying: 984/1024 [MB] (38 MBps) [2024-11-28T09:08:51.748Z] Copying: 1009/1024 [MB] (24 MBps) [2024-11-28T09:08:52.322Z] Copying: 1020/1024 [MB] (11 MBps) [2024-11-28T09:08:52.584Z] Copying: 1048456/1048576 [kB] (3176 kBps) [2024-11-28T09:08:52.584Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-28 09:08:52.428984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.429237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:58.464 [2024-11-28 09:08:52.429269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:58.464 [2024-11-28 09:08:52.429282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 09:08:52.432524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:58.464 [2024-11-28 09:08:52.434709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.434761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:58.464 [2024-11-28 09:08:52.434773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:24:58.464 [2024-11-28 09:08:52.434790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 
09:08:52.445962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.446012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:58.464 [2024-11-28 09:08:52.446024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.569 ms 00:24:58.464 [2024-11-28 09:08:52.446035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 09:08:52.470667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.470730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:58.464 [2024-11-28 09:08:52.470752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.613 ms 00:24:58.464 [2024-11-28 09:08:52.470761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 09:08:52.476921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.476976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:58.464 [2024-11-28 09:08:52.476988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:24:58.464 [2024-11-28 09:08:52.476998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 09:08:52.480158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.480209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:58.464 [2024-11-28 09:08:52.480221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:24:58.464 [2024-11-28 09:08:52.480230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.464 [2024-11-28 09:08:52.485741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.464 [2024-11-28 09:08:52.485815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:24:58.464 [2024-11-28 09:08:52.485828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:24:58.464 [2024-11-28 09:08:52.485839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.631950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.728 [2024-11-28 09:08:52.632016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:58.728 [2024-11-28 09:08:52.632030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.058 ms 00:24:58.728 [2024-11-28 09:08:52.632040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.634975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.728 [2024-11-28 09:08:52.635026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:58.728 [2024-11-28 09:08:52.635038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:24:58.728 [2024-11-28 09:08:52.635048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.637199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.728 [2024-11-28 09:08:52.637245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:58.728 [2024-11-28 09:08:52.637256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.105 ms 00:24:58.728 [2024-11-28 09:08:52.637265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.639251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.728 [2024-11-28 09:08:52.639300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:58.728 [2024-11-28 09:08:52.639310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.942 ms 00:24:58.728 [2024-11-28 09:08:52.639319] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.641911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.728 [2024-11-28 09:08:52.641959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:58.728 [2024-11-28 09:08:52.641970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:24:58.728 [2024-11-28 09:08:52.641978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.728 [2024-11-28 09:08:52.642020] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:58.728 [2024-11-28 09:08:52.642037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 96768 / 261120 wr_cnt: 1 state: open 00:24:58.728 [2024-11-28 09:08:52.642049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:58.728 [2024-11-28 09:08:52.642109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642351] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642489] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642608] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 09:08:52.642716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:58.729 [2024-11-28 
09:08:52.642725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 
[2024-11-28 09:08:52.642855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:58.730 [2024-11-28 09:08:52.642925] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:58.730 [2024-11-28 09:08:52.642955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d68cafb5-7169-49f9-9dc2-5db61c434e14 00:24:58.730 [2024-11-28 09:08:52.642975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 96768 00:24:58.730 [2024-11-28 09:08:52.642985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97728 00:24:58.730 [2024-11-28 09:08:52.642994] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 96768 00:24:58.730 [2024-11-28 09:08:52.643004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:24:58.730 [2024-11-28 09:08:52.643012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:58.730 [2024-11-28 09:08:52.643022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:58.730 [2024-11-28 09:08:52.643033] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:58.730 [2024-11-28 09:08:52.643041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:58.730 [2024-11-28 09:08:52.643048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:58.730 [2024-11-28 09:08:52.643063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.730 [2024-11-28 09:08:52.643072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:58.730 [2024-11-28 09:08:52.643081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:24:58.730 [2024-11-28 09:08:52.643092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.646323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.730 [2024-11-28 09:08:52.646363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:58.730 [2024-11-28 09:08:52.646386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.209 ms 00:24:58.730 [2024-11-28 09:08:52.646395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.646555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.730 [2024-11-28 09:08:52.646567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:58.730 [2024-11-28 09:08:52.646580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:24:58.730 [2024-11-28 09:08:52.646589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.656042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.656091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.730 [2024-11-28 09:08:52.656103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 
09:08:52.656113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.656171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.656181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.730 [2024-11-28 09:08:52.656195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.656204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.656282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.656296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.730 [2024-11-28 09:08:52.656305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.656314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.656330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.656339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.730 [2024-11-28 09:08:52.656348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.656360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.675961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.676157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.730 [2024-11-28 09:08:52.676170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.676180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.691281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:58.730 [2024-11-28 09:08:52.691340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.730 [2024-11-28 09:08:52.691362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.691372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.691437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.691448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.730 [2024-11-28 09:08:52.691458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.691467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.691511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.691522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.730 [2024-11-28 09:08:52.691533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.691542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.691629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.691641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.730 [2024-11-28 09:08:52.691650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.691664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.730 [2024-11-28 09:08:52.691700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.730 [2024-11-28 09:08:52.691716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:58.730 [2024-11-28 09:08:52.691727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.730 [2024-11-28 09:08:52.691736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.731 [2024-11-28 09:08:52.691792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.731 [2024-11-28 09:08:52.691830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.731 [2024-11-28 09:08:52.691840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.731 [2024-11-28 09:08:52.691849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.731 [2024-11-28 09:08:52.691919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.731 [2024-11-28 09:08:52.691970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.731 [2024-11-28 09:08:52.691980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.731 [2024-11-28 09:08:52.691993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.731 [2024-11-28 09:08:52.692169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.293 ms, result 0 00:24:59.675 00:24:59.675 00:24:59.675 09:08:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:02.237 09:08:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:02.237 [2024-11-28 09:08:55.804861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:02.237 [2024-11-28 09:08:55.804968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91118 ] 00:25:02.237 [2024-11-28 09:08:55.949851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.237 [2024-11-28 09:08:56.008317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.237 [2024-11-28 09:08:56.147607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:02.237 [2024-11-28 09:08:56.147707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:02.237 [2024-11-28 09:08:56.311264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.311326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:02.237 [2024-11-28 09:08:56.311345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:02.237 [2024-11-28 09:08:56.311355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.311421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.311432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:02.237 [2024-11-28 09:08:56.311442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:02.237 [2024-11-28 09:08:56.311450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.311478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:02.237 [2024-11-28 09:08:56.311841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:02.237 [2024-11-28 
09:08:56.311874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.311884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:02.237 [2024-11-28 09:08:56.311894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:25:02.237 [2024-11-28 09:08:56.311913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.314105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:02.237 [2024-11-28 09:08:56.318591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.318640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:02.237 [2024-11-28 09:08:56.318653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.488 ms 00:25:02.237 [2024-11-28 09:08:56.318661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.318751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.318767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:02.237 [2024-11-28 09:08:56.318776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:02.237 [2024-11-28 09:08:56.318789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.329634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.329679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:02.237 [2024-11-28 09:08:56.329691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.770 ms 00:25:02.237 [2024-11-28 09:08:56.329707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.329839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.329851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:02.237 [2024-11-28 09:08:56.329860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:02.237 [2024-11-28 09:08:56.329869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.237 [2024-11-28 09:08:56.329936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.237 [2024-11-28 09:08:56.329955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:02.238 [2024-11-28 09:08:56.329964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:02.238 [2024-11-28 09:08:56.329976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.238 [2024-11-28 09:08:56.330007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:02.238 [2024-11-28 09:08:56.332520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.238 [2024-11-28 09:08:56.332561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:02.238 [2024-11-28 09:08:56.332581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.525 ms 00:25:02.238 [2024-11-28 09:08:56.332589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.238 [2024-11-28 09:08:56.332627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.238 [2024-11-28 09:08:56.332642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:02.238 [2024-11-28 09:08:56.332655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:02.238 [2024-11-28 09:08:56.332664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.238 [2024-11-28 09:08:56.332689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:02.238 
[2024-11-28 09:08:56.332720] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:02.238 [2024-11-28 09:08:56.332763] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:02.238 [2024-11-28 09:08:56.332782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:02.238 [2024-11-28 09:08:56.332918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:02.238 [2024-11-28 09:08:56.332934] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:02.238 [2024-11-28 09:08:56.332948] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:02.238 [2024-11-28 09:08:56.332964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:02.238 [2024-11-28 09:08:56.332981] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:02.238 [2024-11-28 09:08:56.332989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:02.238 [2024-11-28 09:08:56.333000] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:02.238 [2024-11-28 09:08:56.333009] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:02.238 [2024-11-28 09:08:56.333018] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:02.238 [2024-11-28 09:08:56.333030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.238 [2024-11-28 09:08:56.333039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:02.238 [2024-11-28 09:08:56.333048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.346 ms 00:25:02.238 [2024-11-28 09:08:56.333056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.238 [2024-11-28 09:08:56.333139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.238 [2024-11-28 09:08:56.333152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:02.238 [2024-11-28 09:08:56.333161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:02.238 [2024-11-28 09:08:56.333173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.238 [2024-11-28 09:08:56.333289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:02.238 [2024-11-28 09:08:56.333304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:02.238 [2024-11-28 09:08:56.333316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:02.238 [2024-11-28 09:08:56.333353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:02.238 [2024-11-28 09:08:56.333381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:02.238 [2024-11-28 09:08:56.333399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:02.238 [2024-11-28 09:08:56.333407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:02.238 [2024-11-28 09:08:56.333417] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:02.238 [2024-11-28 09:08:56.333426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:02.238 [2024-11-28 09:08:56.333436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:02.238 [2024-11-28 09:08:56.333451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:02.238 [2024-11-28 09:08:56.333474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:02.238 [2024-11-28 09:08:56.333500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:02.238 [2024-11-28 09:08:56.333525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:02.238 [2024-11-28 09:08:56.333549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:02.238 [2024-11-28 09:08:56.333570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333577] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:02.238 [2024-11-28 09:08:56.333613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:02.238 [2024-11-28 09:08:56.333627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:02.238 [2024-11-28 09:08:56.333635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:02.238 [2024-11-28 09:08:56.333642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:02.238 [2024-11-28 09:08:56.333649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:02.238 [2024-11-28 09:08:56.333657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:02.238 [2024-11-28 09:08:56.333665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:02.238 [2024-11-28 09:08:56.333679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:02.238 [2024-11-28 09:08:56.333686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333695] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:02.238 [2024-11-28 09:08:56.333704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:02.238 [2024-11-28 09:08:56.333713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:02.238 [2024-11-28 09:08:56.333741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:02.238 
[2024-11-28 09:08:56.333753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:02.238 [2024-11-28 09:08:56.333761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:02.238 [2024-11-28 09:08:56.333768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:02.238 [2024-11-28 09:08:56.333774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:02.238 [2024-11-28 09:08:56.333781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:02.238 [2024-11-28 09:08:56.333791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:02.238 [2024-11-28 09:08:56.333816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:02.238 [2024-11-28 09:08:56.333826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:02.238 [2024-11-28 09:08:56.333833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:02.238 [2024-11-28 09:08:56.333841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:02.238 [2024-11-28 09:08:56.333849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:02.238 [2024-11-28 09:08:56.333858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:02.238 [2024-11-28 09:08:56.333866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:02.238 [2024-11-28 09:08:56.333874] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:02.238 [2024-11-28 09:08:56.333884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:02.238 [2024-11-28 09:08:56.333895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:02.238 [2024-11-28 09:08:56.333904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:02.238 [2024-11-28 09:08:56.333911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:02.238 [2024-11-28 09:08:56.333919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:02.239 [2024-11-28 09:08:56.333927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:02.239 [2024-11-28 09:08:56.333935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:02.239 [2024-11-28 09:08:56.333944] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:02.239 [2024-11-28 09:08:56.333953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:02.239 [2024-11-28 09:08:56.333962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:02.239 [2024-11-28 09:08:56.333970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:25:02.239 [2024-11-28 09:08:56.333979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:02.239 [2024-11-28 09:08:56.333989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:02.239 [2024-11-28 09:08:56.333997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.239 [2024-11-28 09:08:56.334006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:02.239 [2024-11-28 09:08:56.334014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:25:02.239 [2024-11-28 09:08:56.334022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.366877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.366974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.502 [2024-11-28 09:08:56.367007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.783 ms 00:25:02.502 [2024-11-28 09:08:56.367029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.367278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.367393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:02.502 [2024-11-28 09:08:56.367419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:25:02.502 [2024-11-28 09:08:56.367440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.383236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.383285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.502 [2024-11-28 
09:08:56.383297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.647 ms 00:25:02.502 [2024-11-28 09:08:56.383306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.383355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.383365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.502 [2024-11-28 09:08:56.383374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:02.502 [2024-11-28 09:08:56.383383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.384115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.384162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.502 [2024-11-28 09:08:56.384178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:25:02.502 [2024-11-28 09:08:56.384188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.384352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.384363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.502 [2024-11-28 09:08:56.384372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:02.502 [2024-11-28 09:08:56.384381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.393700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.393742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.502 [2024-11-28 09:08:56.393762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.292 ms 00:25:02.502 [2024-11-28 09:08:56.393776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:02.502 [2024-11-28 09:08:56.398483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:02.502 [2024-11-28 09:08:56.398541] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:02.502 [2024-11-28 09:08:56.398554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.398564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:02.502 [2024-11-28 09:08:56.398578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.645 ms 00:25:02.502 [2024-11-28 09:08:56.398586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.414794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.414851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:02.502 [2024-11-28 09:08:56.414871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.150 ms 00:25:02.502 [2024-11-28 09:08:56.414880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.418062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.418109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:02.502 [2024-11-28 09:08:56.418120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:25:02.502 [2024-11-28 09:08:56.418128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.420765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.420827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:02.502 [2024-11-28 09:08:56.420838] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:25:02.502 [2024-11-28 09:08:56.420847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.421216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.421242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:02.502 [2024-11-28 09:08:56.421256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:25:02.502 [2024-11-28 09:08:56.421265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.451744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.451822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:02.502 [2024-11-28 09:08:56.451837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.454 ms 00:25:02.502 [2024-11-28 09:08:56.451847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.461008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:02.502 [2024-11-28 09:08:56.464656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.464698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:02.502 [2024-11-28 09:08:56.464729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.755 ms 00:25:02.502 [2024-11-28 09:08:56.464738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.464839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.464857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:02.502 [2024-11-28 09:08:56.464870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:02.502 
[2024-11-28 09:08:56.464880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.467225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.467278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:02.502 [2024-11-28 09:08:56.467291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:25:02.502 [2024-11-28 09:08:56.467305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.467347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.467357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:02.502 [2024-11-28 09:08:56.467367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:02.502 [2024-11-28 09:08:56.467380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.467428] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:02.502 [2024-11-28 09:08:56.467451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.467461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:02.502 [2024-11-28 09:08:56.467470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:02.502 [2024-11-28 09:08:56.467479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.473881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.473931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:02.502 [2024-11-28 09:08:56.473944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.377 ms 00:25:02.502 [2024-11-28 09:08:56.473953] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.502 [2024-11-28 09:08:56.474063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.502 [2024-11-28 09:08:56.474079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:02.503 [2024-11-28 09:08:56.474094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:02.503 [2024-11-28 09:08:56.474109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.503 [2024-11-28 09:08:56.475591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.750 ms, result 0 00:25:03.885  [2024-11-28T09:08:59.041Z] Copying: 1104/1048576 [kB] (1104 kBps) [2024-11-28T09:08:59.986Z] Copying: 4204/1048576 [kB] (3100 kBps) [2024-11-28T09:09:00.927Z] Copying: 18/1024 [MB] (14 MBps) [2024-11-28T09:09:01.870Z] Copying: 46/1024 [MB] (27 MBps) [2024-11-28T09:09:02.809Z] Copying: 85/1024 [MB] (39 MBps) [2024-11-28T09:09:03.750Z] Copying: 105/1024 [MB] (20 MBps) [2024-11-28T09:09:04.692Z] Copying: 135/1024 [MB] (29 MBps) [2024-11-28T09:09:06.069Z] Copying: 167/1024 [MB] (32 MBps) [2024-11-28T09:09:07.006Z] Copying: 195/1024 [MB] (27 MBps) [2024-11-28T09:09:07.951Z] Copying: 232/1024 [MB] (37 MBps) [2024-11-28T09:09:08.889Z] Copying: 255/1024 [MB] (23 MBps) [2024-11-28T09:09:09.833Z] Copying: 290/1024 [MB] (34 MBps) [2024-11-28T09:09:10.777Z] Copying: 318/1024 [MB] (28 MBps) [2024-11-28T09:09:11.713Z] Copying: 342/1024 [MB] (23 MBps) [2024-11-28T09:09:13.097Z] Copying: 390/1024 [MB] (48 MBps) [2024-11-28T09:09:13.669Z] Copying: 431/1024 [MB] (40 MBps) [2024-11-28T09:09:15.054Z] Copying: 461/1024 [MB] (29 MBps) [2024-11-28T09:09:15.989Z] Copying: 495/1024 [MB] (33 MBps) [2024-11-28T09:09:16.930Z] Copying: 532/1024 [MB] (37 MBps) [2024-11-28T09:09:17.867Z] Copying: 559/1024 [MB] (26 MBps) [2024-11-28T09:09:18.813Z] Copying: 607/1024 [MB] (48 MBps) [2024-11-28T09:09:19.754Z] Copying: 
631/1024 [MB] (24 MBps) [2024-11-28T09:09:20.696Z] Copying: 663/1024 [MB] (32 MBps) [2024-11-28T09:09:22.082Z] Copying: 698/1024 [MB] (34 MBps) [2024-11-28T09:09:23.028Z] Copying: 729/1024 [MB] (30 MBps) [2024-11-28T09:09:23.961Z] Copying: 757/1024 [MB] (28 MBps) [2024-11-28T09:09:24.902Z] Copying: 787/1024 [MB] (29 MBps) [2024-11-28T09:09:25.866Z] Copying: 817/1024 [MB] (29 MBps) [2024-11-28T09:09:26.809Z] Copying: 845/1024 [MB] (28 MBps) [2024-11-28T09:09:27.838Z] Copying: 876/1024 [MB] (30 MBps) [2024-11-28T09:09:28.809Z] Copying: 903/1024 [MB] (27 MBps) [2024-11-28T09:09:29.753Z] Copying: 936/1024 [MB] (33 MBps) [2024-11-28T09:09:30.698Z] Copying: 975/1024 [MB] (39 MBps) [2024-11-28T09:09:31.644Z] Copying: 1004/1024 [MB] (29 MBps) [2024-11-28T09:09:31.644Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-28 09:09:31.457958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.524 [2024-11-28 09:09:31.458068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:37.524 [2024-11-28 09:09:31.458087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:37.524 [2024-11-28 09:09:31.458098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.524 [2024-11-28 09:09:31.458207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.524 [2024-11-28 09:09:31.459338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.524 [2024-11-28 09:09:31.459384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:37.524 [2024-11-28 09:09:31.459407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:25:37.524 [2024-11-28 09:09:31.459423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.524 [2024-11-28 09:09:31.459687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.459719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:37.525 [2024-11-28 09:09:31.459730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:25:37.525 [2024-11-28 09:09:31.459740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.473514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.473575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:37.525 [2024-11-28 09:09:31.473589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.752 ms 00:25:37.525 [2024-11-28 09:09:31.473622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.479866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.479910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:37.525 [2024-11-28 09:09:31.479922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:25:37.525 [2024-11-28 09:09:31.479932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.483038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.483091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:37.525 [2024-11-28 09:09:31.483102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:25:37.525 [2024-11-28 09:09:31.483110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.488219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.488271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:37.525 [2024-11-28 09:09:31.488283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 
00:25:37.525 [2024-11-28 09:09:31.488303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.493364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.493423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:37.525 [2024-11-28 09:09:31.493434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:25:37.525 [2024-11-28 09:09:31.493445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.496955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.497003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:37.525 [2024-11-28 09:09:31.497016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:25:37.525 [2024-11-28 09:09:31.497025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.499661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.499724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:37.525 [2024-11-28 09:09:31.499735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.591 ms 00:25:37.525 [2024-11-28 09:09:31.499743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.501370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.501416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:37.525 [2024-11-28 09:09:31.501426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:25:37.525 [2024-11-28 09:09:31.501435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.503009] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:37.525 [2024-11-28 09:09:31.503055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:37.525 [2024-11-28 09:09:31.503065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:25:37.525 [2024-11-28 09:09:31.503073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.525 [2024-11-28 09:09:31.503111] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:37.525 [2024-11-28 09:09:31.503139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:37.525 [2024-11-28 09:09:31.503152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:37.525 [2024-11-28 09:09:31.503162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503231] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503345] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:37.525 [2024-11-28 09:09:31.503451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 
09:09:31.503459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 
[2024-11-28 09:09:31.503571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:25:37.526 [2024-11-28 09:09:31.503691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: 
free 00:25:37.526 [2024-11-28 09:09:31.503823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 
state: free 00:25:37.526 [2024-11-28 09:09:31.503954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.503996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:37.526 [2024-11-28 09:09:31.504013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:37.526 [2024-11-28 09:09:31.504022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d68cafb5-7169-49f9-9dc2-5db61c434e14 00:25:37.526 [2024-11-28 09:09:31.504033] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:37.526 [2024-11-28 09:09:31.504063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 167872 00:25:37.526 [2024-11-28 09:09:31.504072] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 165888 00:25:37.527 [2024-11-28 09:09:31.504082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0120 00:25:37.527 [2024-11-28 09:09:31.504099] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:37.527 [2024-11-28 09:09:31.504108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:37.527 [2024-11-28 09:09:31.504121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:37.527 [2024-11-28 09:09:31.504128] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:37.527 [2024-11-28 09:09:31.504135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:37.527 [2024-11-28 09:09:31.504144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.527 [2024-11-28 09:09:31.504152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:37.527 [2024-11-28 09:09:31.504161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:25:37.527 [2024-11-28 09:09:31.504169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.507347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.527 [2024-11-28 09:09:31.507393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:37.527 [2024-11-28 09:09:31.507404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:25:37.527 [2024-11-28 09:09:31.507415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.507567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.527 [2024-11-28 09:09:31.507584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:37.527 [2024-11-28 09:09:31.507593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:25:37.527 [2024-11-28 09:09:31.507608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.516720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.516771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.527 [2024-11-28 09:09:31.516788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.516812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 
09:09:31.516874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.516884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.527 [2024-11-28 09:09:31.516893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.516906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.516972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.516990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.527 [2024-11-28 09:09:31.516999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.517008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.517024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.517034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.527 [2024-11-28 09:09:31.517043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.517051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.536374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.536435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.527 [2024-11-28 09:09:31.536447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.536456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.551861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.551923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:25:37.527 [2024-11-28 09:09:31.551938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.551947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.527 [2024-11-28 09:09:31.552044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.527 [2024-11-28 09:09:31.552120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.527 [2024-11-28 09:09:31.552252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:37.527 [2024-11-28 09:09:31.552319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552328] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.527 [2024-11-28 09:09:31.552405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.527 [2024-11-28 09:09:31.552493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.527 [2024-11-28 09:09:31.552506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.527 [2024-11-28 09:09:31.552515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.527 [2024-11-28 09:09:31.552686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 94.686 ms, result 0 00:25:37.789 00:25:37.789 00:25:37.789 09:09:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:40.336 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:40.336 09:09:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:40.336 [2024-11-28 09:09:34.058494] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:40.336 [2024-11-28 09:09:34.058624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91507 ] 00:25:40.336 [2024-11-28 09:09:34.209485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.336 [2024-11-28 09:09:34.261691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.336 [2024-11-28 09:09:34.400447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:40.336 [2024-11-28 09:09:34.400533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:40.599 [2024-11-28 09:09:34.563894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.563954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:40.599 [2024-11-28 09:09:34.563974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:40.599 [2024-11-28 09:09:34.563983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.564058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.564070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.599 [2024-11-28 09:09:34.564080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:40.599 [2024-11-28 09:09:34.564095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.564117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:40.599 [2024-11-28 09:09:34.564524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:40.599 [2024-11-28 
09:09:34.564571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.564582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.599 [2024-11-28 09:09:34.564594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:25:40.599 [2024-11-28 09:09:34.564606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.567369] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:40.599 [2024-11-28 09:09:34.571868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.571912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:40.599 [2024-11-28 09:09:34.571924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:25:40.599 [2024-11-28 09:09:34.571941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.572021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.572035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:40.599 [2024-11-28 09:09:34.572045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:40.599 [2024-11-28 09:09:34.572053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.583424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.583466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.599 [2024-11-28 09:09:34.583483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.327 ms 00:25:40.599 [2024-11-28 09:09:34.583496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.583608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.583619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.599 [2024-11-28 09:09:34.583629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:40.599 [2024-11-28 09:09:34.583640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.583699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.583713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:40.599 [2024-11-28 09:09:34.583722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:40.599 [2024-11-28 09:09:34.583731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.583759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:40.599 [2024-11-28 09:09:34.586464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.586502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.599 [2024-11-28 09:09:34.586513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:25:40.599 [2024-11-28 09:09:34.586521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.586557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.599 [2024-11-28 09:09:34.586566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:40.599 [2024-11-28 09:09:34.586575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:40.599 [2024-11-28 09:09:34.586583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.599 [2024-11-28 09:09:34.586614] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:40.599 
[2024-11-28 09:09:34.586641] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:40.599 [2024-11-28 09:09:34.586683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:40.599 [2024-11-28 09:09:34.586709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:40.599 [2024-11-28 09:09:34.586845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:40.599 [2024-11-28 09:09:34.586860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:40.600 [2024-11-28 09:09:34.586872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:40.600 [2024-11-28 09:09:34.586884] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:40.600 [2024-11-28 09:09:34.586898] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:40.600 [2024-11-28 09:09:34.586907] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:40.600 [2024-11-28 09:09:34.586915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:40.600 [2024-11-28 09:09:34.586926] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:40.600 [2024-11-28 09:09:34.586936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:40.600 [2024-11-28 09:09:34.586945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-11-28 09:09:34.586954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:40.600 [2024-11-28 09:09:34.586962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.337 ms 00:25:40.600 [2024-11-28 09:09:34.586971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.600 [2024-11-28 09:09:34.587057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-11-28 09:09:34.587070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:40.600 [2024-11-28 09:09:34.587079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:40.600 [2024-11-28 09:09:34.587088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.600 [2024-11-28 09:09:34.587192] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:40.600 [2024-11-28 09:09:34.587215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:40.600 [2024-11-28 09:09:34.587226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:40.600 [2024-11-28 09:09:34.587263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:40.600 [2024-11-28 09:09:34.587291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:40.600 [2024-11-28 09:09:34.587311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:40.600 [2024-11-28 09:09:34.587320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:40.600 [2024-11-28 09:09:34.587329] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:40.600 [2024-11-28 09:09:34.587338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:40.600 [2024-11-28 09:09:34.587353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:40.600 [2024-11-28 09:09:34.587362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:40.600 [2024-11-28 09:09:34.587382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:40.600 [2024-11-28 09:09:34.587406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:40.600 [2024-11-28 09:09:34.587432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:40.600 [2024-11-28 09:09:34.587461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:40.600 [2024-11-28 09:09:34.587486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587492] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:40.600 [2024-11-28 09:09:34.587508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:40.600 [2024-11-28 09:09:34.587522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:40.600 [2024-11-28 09:09:34.587529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:40.600 [2024-11-28 09:09:34.587535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:40.600 [2024-11-28 09:09:34.587541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:40.600 [2024-11-28 09:09:34.587548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:40.600 [2024-11-28 09:09:34.587554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:40.600 [2024-11-28 09:09:34.587569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:40.600 [2024-11-28 09:09:34.587581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587588] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:40.600 [2024-11-28 09:09:34.587600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:40.600 [2024-11-28 09:09:34.587612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:40.600 [2024-11-28 09:09:34.587635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:40.600 
[2024-11-28 09:09:34.587643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:40.600 [2024-11-28 09:09:34.587650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:40.600 [2024-11-28 09:09:34.587657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:40.600 [2024-11-28 09:09:34.587665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:40.600 [2024-11-28 09:09:34.587673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:40.600 [2024-11-28 09:09:34.587682] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:40.600 [2024-11-28 09:09:34.587692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:40.600 [2024-11-28 09:09:34.587708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:40.600 [2024-11-28 09:09:34.587716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:40.600 [2024-11-28 09:09:34.587726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:40.600 [2024-11-28 09:09:34.587734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:40.600 [2024-11-28 09:09:34.587741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:40.600 [2024-11-28 09:09:34.587748] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:40.600 [2024-11-28 09:09:34.587756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:40.600 [2024-11-28 09:09:34.587763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:40.600 [2024-11-28 09:09:34.587771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:40.600 [2024-11-28 09:09:34.587843] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:40.600 [2024-11-28 09:09:34.587855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:40.600 [2024-11-28 09:09:34.587872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:25:40.600 [2024-11-28 09:09:34.587880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:40.600 [2024-11-28 09:09:34.587891] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:40.600 [2024-11-28 09:09:34.587903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-11-28 09:09:34.587912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:40.600 [2024-11-28 09:09:34.587921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:25:40.600 [2024-11-28 09:09:34.587931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.600 [2024-11-28 09:09:34.615827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.600 [2024-11-28 09:09:34.615887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.600 [2024-11-28 09:09:34.615904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.823 ms 00:25:40.600 [2024-11-28 09:09:34.615916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.600 [2024-11-28 09:09:34.616038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.616052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:40.601 [2024-11-28 09:09:34.616066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:40.601 [2024-11-28 09:09:34.616086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.631994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.632044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.601 [2024-11-28 
09:09:34.632057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.828 ms 00:25:40.601 [2024-11-28 09:09:34.632066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.632109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.632118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.601 [2024-11-28 09:09:34.632128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:40.601 [2024-11-28 09:09:34.632140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.632864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.632902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.601 [2024-11-28 09:09:34.632915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:25:40.601 [2024-11-28 09:09:34.632925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.633091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.633104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.601 [2024-11-28 09:09:34.633113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:40.601 [2024-11-28 09:09:34.633123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.642625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.642671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.601 [2024-11-28 09:09:34.642689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.474 ms 00:25:40.601 [2024-11-28 09:09:34.642698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:40.601 [2024-11-28 09:09:34.647148] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:40.601 [2024-11-28 09:09:34.647198] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:40.601 [2024-11-28 09:09:34.647213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.647223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:40.601 [2024-11-28 09:09:34.647236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:25:40.601 [2024-11-28 09:09:34.647244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.663410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.663457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:40.601 [2024-11-28 09:09:34.663473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.102 ms 00:25:40.601 [2024-11-28 09:09:34.663482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.666704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.666751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:40.601 [2024-11-28 09:09:34.666762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:25:40.601 [2024-11-28 09:09:34.666769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.669243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.669285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:40.601 [2024-11-28 09:09:34.669296] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:25:40.601 [2024-11-28 09:09:34.669304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.669705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.669772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:40.601 [2024-11-28 09:09:34.669784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:25:40.601 [2024-11-28 09:09:34.669792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.699218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.699282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:40.601 [2024-11-28 09:09:34.699295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.383 ms 00:25:40.601 [2024-11-28 09:09:34.699304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.708053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:40.601 [2024-11-28 09:09:34.711464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.711522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:40.601 [2024-11-28 09:09:34.711542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.102 ms 00:25:40.601 [2024-11-28 09:09:34.711552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.711629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.711642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:40.601 [2024-11-28 09:09:34.711654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:40.601 
[2024-11-28 09:09:34.711663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.712759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.712835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:40.601 [2024-11-28 09:09:34.712848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:25:40.601 [2024-11-28 09:09:34.712863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.712896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.712906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:40.601 [2024-11-28 09:09:34.712916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:40.601 [2024-11-28 09:09:34.712925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.601 [2024-11-28 09:09:34.712972] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:40.601 [2024-11-28 09:09:34.712985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.601 [2024-11-28 09:09:34.712994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:40.601 [2024-11-28 09:09:34.713007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:40.601 [2024-11-28 09:09:34.713017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.864 [2024-11-28 09:09:34.719328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.864 [2024-11-28 09:09:34.719377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:40.864 [2024-11-28 09:09:34.719389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.282 ms 00:25:40.864 [2024-11-28 09:09:34.719398] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.864 [2024-11-28 09:09:34.719496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.864 [2024-11-28 09:09:34.719508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:40.864 [2024-11-28 09:09:34.719518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:40.864 [2024-11-28 09:09:34.719527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.864 [2024-11-28 09:09:34.720941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.453 ms, result 0 00:25:41.806  [2024-11-28T09:09:37.311Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-28T09:09:38.252Z] Copying: 37/1024 [MB] (18 MBps) [2024-11-28T09:09:39.198Z] Copying: 57/1024 [MB] (20 MBps) [2024-11-28T09:09:40.140Z] Copying: 69/1024 [MB] (12 MBps) [2024-11-28T09:09:41.081Z] Copying: 86/1024 [MB] (16 MBps) [2024-11-28T09:09:42.025Z] Copying: 104/1024 [MB] (18 MBps) [2024-11-28T09:09:42.969Z] Copying: 118/1024 [MB] (13 MBps) [2024-11-28T09:09:43.910Z] Copying: 136/1024 [MB] (17 MBps) [2024-11-28T09:09:45.295Z] Copying: 159/1024 [MB] (22 MBps) [2024-11-28T09:09:46.238Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-28T09:09:47.183Z] Copying: 183/1024 [MB] (14 MBps) [2024-11-28T09:09:48.118Z] Copying: 194/1024 [MB] (10 MBps) [2024-11-28T09:09:49.054Z] Copying: 209/1024 [MB] (15 MBps) [2024-11-28T09:09:50.002Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-28T09:09:50.943Z] Copying: 239/1024 [MB] (10 MBps) [2024-11-28T09:09:52.329Z] Copying: 260/1024 [MB] (20 MBps) [2024-11-28T09:09:52.902Z] Copying: 273/1024 [MB] (13 MBps) [2024-11-28T09:09:54.289Z] Copying: 286/1024 [MB] (13 MBps) [2024-11-28T09:09:55.245Z] Copying: 297/1024 [MB] (10 MBps) [2024-11-28T09:09:56.195Z] Copying: 309/1024 [MB] (11 MBps) [2024-11-28T09:09:57.164Z] Copying: 329/1024 [MB] (20 MBps) [2024-11-28T09:09:58.104Z] Copying: 345/1024 [MB] 
(16 MBps) [2024-11-28T09:09:59.047Z] Copying: 366/1024 [MB] (21 MBps) [2024-11-28T09:09:59.989Z] Copying: 384/1024 [MB] (18 MBps) [2024-11-28T09:10:00.921Z] Copying: 396/1024 [MB] (11 MBps) [2024-11-28T09:10:02.300Z] Copying: 413/1024 [MB] (17 MBps) [2024-11-28T09:10:03.242Z] Copying: 431/1024 [MB] (17 MBps) [2024-11-28T09:10:04.186Z] Copying: 441/1024 [MB] (10 MBps) [2024-11-28T09:10:05.130Z] Copying: 451/1024 [MB] (10 MBps) [2024-11-28T09:10:06.064Z] Copying: 462/1024 [MB] (10 MBps) [2024-11-28T09:10:06.994Z] Copying: 476/1024 [MB] (13 MBps) [2024-11-28T09:10:07.928Z] Copying: 493/1024 [MB] (16 MBps) [2024-11-28T09:10:09.313Z] Copying: 514/1024 [MB] (21 MBps) [2024-11-28T09:10:10.251Z] Copying: 526/1024 [MB] (11 MBps) [2024-11-28T09:10:11.190Z] Copying: 540/1024 [MB] (13 MBps) [2024-11-28T09:10:12.125Z] Copying: 553/1024 [MB] (12 MBps) [2024-11-28T09:10:13.057Z] Copying: 567/1024 [MB] (14 MBps) [2024-11-28T09:10:13.993Z] Copying: 583/1024 [MB] (16 MBps) [2024-11-28T09:10:14.935Z] Copying: 597/1024 [MB] (13 MBps) [2024-11-28T09:10:16.313Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-28T09:10:17.255Z] Copying: 621/1024 [MB] (14 MBps) [2024-11-28T09:10:18.194Z] Copying: 641/1024 [MB] (19 MBps) [2024-11-28T09:10:19.128Z] Copying: 653/1024 [MB] (11 MBps) [2024-11-28T09:10:20.072Z] Copying: 672/1024 [MB] (18 MBps) [2024-11-28T09:10:21.013Z] Copying: 684/1024 [MB] (12 MBps) [2024-11-28T09:10:21.958Z] Copying: 696/1024 [MB] (11 MBps) [2024-11-28T09:10:23.338Z] Copying: 713/1024 [MB] (16 MBps) [2024-11-28T09:10:23.911Z] Copying: 730/1024 [MB] (17 MBps) [2024-11-28T09:10:25.004Z] Copying: 752/1024 [MB] (21 MBps) [2024-11-28T09:10:25.953Z] Copying: 771/1024 [MB] (19 MBps) [2024-11-28T09:10:27.338Z] Copying: 786/1024 [MB] (14 MBps) [2024-11-28T09:10:27.911Z] Copying: 803/1024 [MB] (16 MBps) [2024-11-28T09:10:29.298Z] Copying: 817/1024 [MB] (14 MBps) [2024-11-28T09:10:30.241Z] Copying: 839/1024 [MB] (21 MBps) [2024-11-28T09:10:31.184Z] Copying: 857/1024 [MB] (17 MBps) 
[2024-11-28T09:10:32.128Z] Copying: 875/1024 [MB] (18 MBps) [2024-11-28T09:10:33.069Z] Copying: 894/1024 [MB] (19 MBps) [2024-11-28T09:10:34.010Z] Copying: 908/1024 [MB] (13 MBps) [2024-11-28T09:10:34.954Z] Copying: 930/1024 [MB] (21 MBps) [2024-11-28T09:10:36.337Z] Copying: 951/1024 [MB] (21 MBps) [2024-11-28T09:10:36.904Z] Copying: 971/1024 [MB] (19 MBps) [2024-11-28T09:10:38.292Z] Copying: 992/1024 [MB] (21 MBps) [2024-11-28T09:10:38.867Z] Copying: 1010/1024 [MB] (17 MBps) [2024-11-28T09:10:38.867Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-28 09:10:38.671340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.671588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:44.747 [2024-11-28 09:10:38.671613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.747 [2024-11-28 09:10:38.671624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.671662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:44.747 [2024-11-28 09:10:38.672716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.672765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:44.747 [2024-11-28 09:10:38.672792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:26:44.747 [2024-11-28 09:10:38.672826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.673130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.673145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:44.747 [2024-11-28 09:10:38.673157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:26:44.747 [2024-11-28 09:10:38.673174] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.678316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.678392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:44.747 [2024-11-28 09:10:38.678408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:26:44.747 [2024-11-28 09:10:38.678419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.686701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.686756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:44.747 [2024-11-28 09:10:38.686768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.251 ms 00:26:44.747 [2024-11-28 09:10:38.686778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.690235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.690294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:44.747 [2024-11-28 09:10:38.690305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.367 ms 00:26:44.747 [2024-11-28 09:10:38.690314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.696024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.696083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:44.747 [2024-11-28 09:10:38.696096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.653 ms 00:26:44.747 [2024-11-28 09:10:38.696104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.700936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.700984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:44.747 [2024-11-28 09:10:38.701004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:26:44.747 [2024-11-28 09:10:38.701013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.704602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.704662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:44.747 [2024-11-28 09:10:38.704673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.569 ms 00:26:44.747 [2024-11-28 09:10:38.704681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.707362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.707419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:44.747 [2024-11-28 09:10:38.707429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:26:44.747 [2024-11-28 09:10:38.707436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.747 [2024-11-28 09:10:38.709274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.747 [2024-11-28 09:10:38.709351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:44.747 [2024-11-28 09:10:38.709362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:26:44.747 [2024-11-28 09:10:38.709370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.748 [2024-11-28 09:10:38.711288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.748 [2024-11-28 09:10:38.711346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:44.748 [2024-11-28 09:10:38.711357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.837 ms 00:26:44.748 [2024-11-28 09:10:38.711365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.748 [2024-11-28 09:10:38.711407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:44.748 [2024-11-28 09:10:38.711446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:44.748 [2024-11-28 09:10:38.711459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:44.748 [2024-11-28 09:10:38.711468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711549] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 
09:10:38.711661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 
[2024-11-28 09:10:38.711774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:26:44.748 [2024-11-28 09:10:38.711905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.711991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: 
free 00:26:44.748 [2024-11-28 09:10:38.712021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 
state: free 00:26:44.748 [2024-11-28 09:10:38.712132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:44.748 [2024-11-28 09:10:38.712140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 
0 state: free 00:26:44.749 [2024-11-28 09:10:38.712240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:44.749 [2024-11-28 09:10:38.712287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:44.749 [2024-11-28 09:10:38.712295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d68cafb5-7169-49f9-9dc2-5db61c434e14 00:26:44.749 [2024-11-28 09:10:38.712304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:44.749 [2024-11-28 09:10:38.712313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:44.749 [2024-11-28 09:10:38.712321] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:44.749 [2024-11-28 09:10:38.712330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:44.749 [2024-11-28 09:10:38.712338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:44.749 [2024-11-28 09:10:38.712347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:44.749 [2024-11-28 09:10:38.712356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:44.749 [2024-11-28 09:10:38.712363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:44.749 [2024-11-28 09:10:38.712371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:44.749 [2024-11-28 09:10:38.712379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.749 [2024-11-28 
09:10:38.712387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:44.749 [2024-11-28 09:10:38.712405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:26:44.749 [2024-11-28 09:10:38.712415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.715739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.749 [2024-11-28 09:10:38.715784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:44.749 [2024-11-28 09:10:38.715814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.295 ms 00:26:44.749 [2024-11-28 09:10:38.715826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.715997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.749 [2024-11-28 09:10:38.716008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:44.749 [2024-11-28 09:10:38.716017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:26:44.749 [2024-11-28 09:10:38.716026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.725329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.725383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.749 [2024-11-28 09:10:38.725395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.725405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.725470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.725480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.749 [2024-11-28 09:10:38.725489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.725498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.725570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.725597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.749 [2024-11-28 09:10:38.725611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.725627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.725650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.725668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.749 [2024-11-28 09:10:38.725682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.725695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.744365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.744425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.749 [2024-11-28 09:10:38.744437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.744447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.749 [2024-11-28 09:10:38.759212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759282] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.749 [2024-11-28 09:10:38.759302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.749 [2024-11-28 09:10:38.759375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.749 [2024-11-28 09:10:38.759493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:44.749 [2024-11-28 09:10:38.759550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:26:44.749 [2024-11-28 09:10:38.759634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.749 [2024-11-28 09:10:38.759708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.749 [2024-11-28 09:10:38.759723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.749 [2024-11-28 09:10:38.759733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.749 [2024-11-28 09:10:38.759914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.540 ms, result 0 00:26:45.011 00:26:45.011 00:26:45.011 09:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:47.560 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@37 -- # killprocess 89642 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89642 ']' 00:26:47.560 Process with pid 89642 is not found 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89642 00:26:47.560 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89642) - No such process 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89642 is not found' 00:26:47.560 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:48.134 Remove shared memory files 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:48.134 00:26:48.134 real 4m2.682s 00:26:48.134 user 4m29.415s 00:26:48.134 sys 0m27.431s 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.134 ************************************ 00:26:48.134 END TEST ftl_dirty_shutdown 00:26:48.134 ************************************ 00:26:48.134 09:10:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:48.134 09:10:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:48.134 09:10:42 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:48.134 09:10:42 ftl -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.134 09:10:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:48.134 ************************************ 00:26:48.134 START TEST ftl_upgrade_shutdown 00:26:48.134 ************************************ 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:48.134 * Looking for test storage... 00:26:48.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.134 09:10:42 
ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.134 09:10:42 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:48.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.135 --rc genhtml_branch_coverage=1 00:26:48.135 --rc genhtml_function_coverage=1 00:26:48.135 --rc genhtml_legend=1 
00:26:48.135 --rc geninfo_all_blocks=1 00:26:48.135 --rc geninfo_unexecuted_blocks=1 00:26:48.135 00:26:48.135 ' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:48.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.135 --rc genhtml_branch_coverage=1 00:26:48.135 --rc genhtml_function_coverage=1 00:26:48.135 --rc genhtml_legend=1 00:26:48.135 --rc geninfo_all_blocks=1 00:26:48.135 --rc geninfo_unexecuted_blocks=1 00:26:48.135 00:26:48.135 ' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:48.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.135 --rc genhtml_branch_coverage=1 00:26:48.135 --rc genhtml_function_coverage=1 00:26:48.135 --rc genhtml_legend=1 00:26:48.135 --rc geninfo_all_blocks=1 00:26:48.135 --rc geninfo_unexecuted_blocks=1 00:26:48.135 00:26:48.135 ' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:48.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.135 --rc genhtml_branch_coverage=1 00:26:48.135 --rc genhtml_function_coverage=1 00:26:48.135 --rc genhtml_legend=1 00:26:48.135 --rc geninfo_all_blocks=1 00:26:48.135 --rc geninfo_unexecuted_blocks=1 00:26:48.135 00:26:48.135 ' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown 
-- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:48.135 09:10:42 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92273 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92273 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92273 ']' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:48.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.135 09:10:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:48.397 [2024-11-28 09:10:42.320376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:48.397 [2024-11-28 09:10:42.320555] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92273 ] 00:26:48.397 [2024-11-28 09:10:42.477832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.658 [2024-11-28 09:10:42.552947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:49.234 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:49.495 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:49.756 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:49.756 { 00:26:49.756 "name": "basen1", 00:26:49.756 "aliases": [ 00:26:49.756 "a20088b9-5f3e-425e-81e2-bc1a05d2b82a" 00:26:49.756 ], 00:26:49.756 "product_name": 
"NVMe disk", 00:26:49.756 "block_size": 4096, 00:26:49.756 "num_blocks": 1310720, 00:26:49.756 "uuid": "a20088b9-5f3e-425e-81e2-bc1a05d2b82a", 00:26:49.756 "numa_id": -1, 00:26:49.756 "assigned_rate_limits": { 00:26:49.756 "rw_ios_per_sec": 0, 00:26:49.756 "rw_mbytes_per_sec": 0, 00:26:49.756 "r_mbytes_per_sec": 0, 00:26:49.756 "w_mbytes_per_sec": 0 00:26:49.756 }, 00:26:49.756 "claimed": true, 00:26:49.756 "claim_type": "read_many_write_one", 00:26:49.756 "zoned": false, 00:26:49.756 "supported_io_types": { 00:26:49.757 "read": true, 00:26:49.757 "write": true, 00:26:49.757 "unmap": true, 00:26:49.757 "flush": true, 00:26:49.757 "reset": true, 00:26:49.757 "nvme_admin": true, 00:26:49.757 "nvme_io": true, 00:26:49.757 "nvme_io_md": false, 00:26:49.757 "write_zeroes": true, 00:26:49.757 "zcopy": false, 00:26:49.757 "get_zone_info": false, 00:26:49.757 "zone_management": false, 00:26:49.757 "zone_append": false, 00:26:49.757 "compare": true, 00:26:49.757 "compare_and_write": false, 00:26:49.757 "abort": true, 00:26:49.757 "seek_hole": false, 00:26:49.757 "seek_data": false, 00:26:49.757 "copy": true, 00:26:49.757 "nvme_iov_md": false 00:26:49.757 }, 00:26:49.757 "driver_specific": { 00:26:49.757 "nvme": [ 00:26:49.757 { 00:26:49.757 "pci_address": "0000:00:11.0", 00:26:49.757 "trid": { 00:26:49.757 "trtype": "PCIe", 00:26:49.757 "traddr": "0000:00:11.0" 00:26:49.757 }, 00:26:49.757 "ctrlr_data": { 00:26:49.757 "cntlid": 0, 00:26:49.757 "vendor_id": "0x1b36", 00:26:49.757 "model_number": "QEMU NVMe Ctrl", 00:26:49.757 "serial_number": "12341", 00:26:49.757 "firmware_revision": "8.0.0", 00:26:49.757 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:49.757 "oacs": { 00:26:49.757 "security": 0, 00:26:49.757 "format": 1, 00:26:49.757 "firmware": 0, 00:26:49.757 "ns_manage": 1 00:26:49.757 }, 00:26:49.757 "multi_ctrlr": false, 00:26:49.757 "ana_reporting": false 00:26:49.757 }, 00:26:49.757 "vs": { 00:26:49.757 "nvme_version": "1.4" 00:26:49.757 }, 00:26:49.757 "ns_data": { 
00:26:49.757 "id": 1, 00:26:49.757 "can_share": false 00:26:49.757 } 00:26:49.757 } 00:26:49.757 ], 00:26:49.757 "mp_policy": "active_passive" 00:26:49.757 } 00:26:49.757 } 00:26:49.757 ]' 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:49.757 09:10:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:50.018 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a76b09f7-8cb4-43d3-aa85-8a8dacbdd764 00:26:50.018 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:50.018 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a76b09f7-8cb4-43d3-aa85-8a8dacbdd764 00:26:50.280 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:50.541 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=846d5a56-ca3f-48e7-aa04-557594324c75 00:26:50.541 09:10:44 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 846d5a56-ca3f-48e7-aa04-557594324c75 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=fcf040ef-4a84-45ee-a00f-cb36b45a94f5 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z fcf040ef-4a84-45ee-a00f-cb36b45a94f5 ]] 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 fcf040ef-4a84-45ee-a00f-cb36b45a94f5 5120 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=fcf040ef-4a84-45ee-a00f-cb36b45a94f5 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size fcf040ef-4a84-45ee-a00f-cb36b45a94f5 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fcf040ef-4a84-45ee-a00f-cb36b45a94f5 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:50.803 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fcf040ef-4a84-45ee-a00f-cb36b45a94f5 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:51.065 { 00:26:51.065 "name": "fcf040ef-4a84-45ee-a00f-cb36b45a94f5", 00:26:51.065 "aliases": [ 00:26:51.065 "lvs/basen1p0" 00:26:51.065 ], 00:26:51.065 "product_name": "Logical Volume", 00:26:51.065 
"block_size": 4096, 00:26:51.065 "num_blocks": 5242880, 00:26:51.065 "uuid": "fcf040ef-4a84-45ee-a00f-cb36b45a94f5", 00:26:51.065 "assigned_rate_limits": { 00:26:51.065 "rw_ios_per_sec": 0, 00:26:51.065 "rw_mbytes_per_sec": 0, 00:26:51.065 "r_mbytes_per_sec": 0, 00:26:51.065 "w_mbytes_per_sec": 0 00:26:51.065 }, 00:26:51.065 "claimed": false, 00:26:51.065 "zoned": false, 00:26:51.065 "supported_io_types": { 00:26:51.065 "read": true, 00:26:51.065 "write": true, 00:26:51.065 "unmap": true, 00:26:51.065 "flush": false, 00:26:51.065 "reset": true, 00:26:51.065 "nvme_admin": false, 00:26:51.065 "nvme_io": false, 00:26:51.065 "nvme_io_md": false, 00:26:51.065 "write_zeroes": true, 00:26:51.065 "zcopy": false, 00:26:51.065 "get_zone_info": false, 00:26:51.065 "zone_management": false, 00:26:51.065 "zone_append": false, 00:26:51.065 "compare": false, 00:26:51.065 "compare_and_write": false, 00:26:51.065 "abort": false, 00:26:51.065 "seek_hole": true, 00:26:51.065 "seek_data": true, 00:26:51.065 "copy": false, 00:26:51.065 "nvme_iov_md": false 00:26:51.065 }, 00:26:51.065 "driver_specific": { 00:26:51.065 "lvol": { 00:26:51.065 "lvol_store_uuid": "846d5a56-ca3f-48e7-aa04-557594324c75", 00:26:51.065 "base_bdev": "basen1", 00:26:51.065 "thin_provision": true, 00:26:51.065 "num_allocated_clusters": 0, 00:26:51.065 "snapshot": false, 00:26:51.065 "clone": false, 00:26:51.065 "esnap_clone": false 00:26:51.065 } 00:26:51.065 } 00:26:51.065 } 00:26:51.065 ]' 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:51.065 09:10:44 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:51.065 09:10:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:51.326 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:51.326 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:51.326 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:51.585 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:51.585 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:51.586 09:10:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d fcf040ef-4a84-45ee-a00f-cb36b45a94f5 -c cachen1p0 --l2p_dram_limit 2 00:26:51.586 [2024-11-28 09:10:45.681593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.681638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:51.586 [2024-11-28 09:10:45.681650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:51.586 [2024-11-28 09:10:45.681659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.681706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.681716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:51.586 [2024-11-28 09:10:45.681723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:51.586 [2024-11-28 09:10:45.681733] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.681753] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:51.586 [2024-11-28 09:10:45.681999] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:51.586 [2024-11-28 09:10:45.682011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.682020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:51.586 [2024-11-28 09:10:45.682029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:26:51.586 [2024-11-28 09:10:45.682038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.682063] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 34e90906-802b-41ad-a111-9e2030352ea7 00:26:51.586 [2024-11-28 09:10:45.683340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.683459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:51.586 [2024-11-28 09:10:45.683475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:51.586 [2024-11-28 09:10:45.683482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.690593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.690684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:51.586 [2024-11-28 09:10:45.690736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.043 ms 00:26:51.586 [2024-11-28 09:10:45.690754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.690815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 
[2024-11-28 09:10:45.690955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:51.586 [2024-11-28 09:10:45.690977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:51.586 [2024-11-28 09:10:45.690995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.691050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.691069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:51.586 [2024-11-28 09:10:45.691086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:51.586 [2024-11-28 09:10:45.691138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.691174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:51.586 [2024-11-28 09:10:45.692858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.692952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:51.586 [2024-11-28 09:10:45.692999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.691 ms 00:26:51.586 [2024-11-28 09:10:45.693018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.693049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.693067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:51.586 [2024-11-28 09:10:45.693186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:51.586 [2024-11-28 09:10:45.693212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.693236] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:51.586 [2024-11-28 09:10:45.693361] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:51.586 [2024-11-28 09:10:45.693457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:51.586 [2024-11-28 09:10:45.693485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:51.586 [2024-11-28 09:10:45.693510] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:51.586 [2024-11-28 09:10:45.693546] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:51.586 [2024-11-28 09:10:45.693570] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:51.586 [2024-11-28 09:10:45.693605] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:51.586 [2024-11-28 09:10:45.693753] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:51.586 [2024-11-28 09:10:45.693781] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:51.586 [2024-11-28 09:10:45.693808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.693832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:51.586 [2024-11-28 09:10:45.693847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:26:51.586 [2024-11-28 09:10:45.693863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.693975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.586 [2024-11-28 09:10:45.693998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:51.586 [2024-11-28 09:10:45.694014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:51.586 [2024-11-28 09:10:45.694030] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.586 [2024-11-28 09:10:45.694112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:51.586 [2024-11-28 09:10:45.694135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:51.586 [2024-11-28 09:10:45.694151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:51.586 [2024-11-28 09:10:45.694228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:51.586 [2024-11-28 09:10:45.694260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:51.586 [2024-11-28 09:10:45.694274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:51.586 [2024-11-28 09:10:45.694289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:51.586 [2024-11-28 09:10:45.694346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:51.586 [2024-11-28 09:10:45.694360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:51.586 [2024-11-28 09:10:45.694391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:51.586 [2024-11-28 09:10:45.694407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:51.586 [2024-11-28 09:10:45.694460] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:51.586 [2024-11-28 09:10:45.694475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:51.586 [2024-11-28 09:10:45.694504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:51.586 [2024-11-28 09:10:45.694545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:51.586 [2024-11-28 09:10:45.694577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:51.586 [2024-11-28 09:10:45.694591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:51.586 [2024-11-28 09:10:45.694620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:51.586 [2024-11-28 09:10:45.694653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:51.586 [2024-11-28 09:10:45.694728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:51.586 [2024-11-28 09:10:45.694760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:51.586 [2024-11-28 09:10:45.694793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:51.586 [2024-11-28 09:10:45.694843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:51.586 [2024-11-28 09:10:45.694875] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:51.586 [2024-11-28 09:10:45.694905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.694923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:51.586 [2024-11-28 09:10:45.694985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:51.586 [2024-11-28 09:10:45.695003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.695017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:51.586 [2024-11-28 09:10:45.695033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:51.586 [2024-11-28 09:10:45.695047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.695063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:51.586 [2024-11-28 09:10:45.695124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:51.586 [2024-11-28 09:10:45.695145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:51.586 [2024-11-28 09:10:45.695160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:51.586 [2024-11-28 09:10:45.695177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:51.586 [2024-11-28 09:10:45.695191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:51.586 [2024-11-28 09:10:45.695207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:51.586 [2024-11-28 09:10:45.695267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:51.586 [2024-11-28 09:10:45.695285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:51.586 [2024-11-28 09:10:45.695299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:51.586 [2024-11-28 
09:10:45.695319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:51.586 [2024-11-28 09:10:45.695344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:51.586 [2024-11-28 09:10:45.695545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:51.586 [2024-11-28 09:10:45.695634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:51.586 [2024-11-28 09:10:45.695679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:51.586 [2024-11-28 09:10:45.695706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:51.586 [2024-11-28 09:10:45.695728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 
blk_offs:0x2f20 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:51.586 [2024-11-28 09:10:45.695940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:51.587 [2024-11-28 09:10:45.696027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:51.587 [2024-11-28 09:10:45.696053] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:51.587 [2024-11-28 09:10:45.696079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.587 [2024-11-28 09:10:45.696133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:51.587 [2024-11-28 09:10:45.696156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:51.587 [2024-11-28 09:10:45.696181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:51.587 [2024-11-28 09:10:45.696221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:51.587 [2024-11-28 09:10:45.696247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:51.587 [2024-11-28 09:10:45.696268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:51.587 
[2024-11-28 09:10:45.696288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.182 ms 00:26:51.587 [2024-11-28 09:10:45.696304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:51.587 [2024-11-28 09:10:45.696362] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:51.587 [2024-11-28 09:10:45.696391] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:55.789 [2024-11-28 09:10:49.866753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.867062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:55.789 [2024-11-28 09:10:49.867152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4170.369 ms 00:26:55.789 [2024-11-28 09:10:49.867178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.879394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.879546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:55.789 [2024-11-28 09:10:49.879611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.100 ms 00:26:55.789 [2024-11-28 09:10:49.879635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.879700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.879723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:55.789 [2024-11-28 09:10:49.879749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:55.789 [2024-11-28 09:10:49.879767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.890474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 
09:10:49.890605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:55.789 [2024-11-28 09:10:49.890672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.618 ms 00:26:55.789 [2024-11-28 09:10:49.890701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.890750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.890775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:55.789 [2024-11-28 09:10:49.890808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:55.789 [2024-11-28 09:10:49.890829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.891344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.891448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:55.789 [2024-11-28 09:10:49.891530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:26:55.789 [2024-11-28 09:10:49.891541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.789 [2024-11-28 09:10:49.891589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.789 [2024-11-28 09:10:49.891597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:55.789 [2024-11-28 09:10:49.891610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:55.789 [2024-11-28 09:10:49.891618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.908860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.908918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:56.049 [2024-11-28 09:10:49.908941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.216 
ms 00:26:56.049 [2024-11-28 09:10:49.908955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.919269] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:56.049 [2024-11-28 09:10:49.920286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.920419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:56.049 [2024-11-28 09:10:49.920436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.204 ms 00:26:56.049 [2024-11-28 09:10:49.920451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.934844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.934971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:56.049 [2024-11-28 09:10:49.934989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.362 ms 00:26:56.049 [2024-11-28 09:10:49.935002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.935083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.935096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:56.049 [2024-11-28 09:10:49.935105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:26:56.049 [2024-11-28 09:10:49.935115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.938368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.938479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:56.049 [2024-11-28 09:10:49.938496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.234 ms 00:26:56.049 [2024-11-28 09:10:49.938506] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.941896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.941933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:56.049 [2024-11-28 09:10:49.941943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.356 ms 00:26:56.049 [2024-11-28 09:10:49.941952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.942246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.942258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:56.049 [2024-11-28 09:10:49.942267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:26:56.049 [2024-11-28 09:10:49.942278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.972845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.972969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:56.049 [2024-11-28 09:10:49.972990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.549 ms 00:26:56.049 [2024-11-28 09:10:49.973000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.977873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.977911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:56.049 [2024-11-28 09:10:49.977922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.827 ms 00:26:56.049 [2024-11-28 09:10:49.977931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.981772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.981816] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:56.049 [2024-11-28 09:10:49.981825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.807 ms 00:26:56.049 [2024-11-28 09:10:49.981834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.986118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.986158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:56.049 [2024-11-28 09:10:49.986168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.251 ms 00:26:56.049 [2024-11-28 09:10:49.986180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.986218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.986229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:56.049 [2024-11-28 09:10:49.986238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:56.049 [2024-11-28 09:10:49.986252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.986317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.049 [2024-11-28 09:10:49.986328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:56.049 [2024-11-28 09:10:49.986337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:56.049 [2024-11-28 09:10:49.986346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.049 [2024-11-28 09:10:49.987295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4305.266 ms, result 0 00:26:56.049 { 00:26:56.049 "name": "ftl", 00:26:56.049 "uuid": "34e90906-802b-41ad-a111-9e2030352ea7" 00:26:56.049 } 00:26:56.049 09:10:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:56.309 [2024-11-28 09:10:50.195793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.309 09:10:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:56.309 09:10:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:56.568 [2024-11-28 09:10:50.612278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:56.568 09:10:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:56.827 [2024-11-28 09:10:50.832835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:56.827 09:10:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:57.395 Fill FTL, iteration 1 00:26:57.395 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:57.395 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:57.395 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:57.395 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:57.395 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92395 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92395 /var/tmp/spdk.tgt.sock 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92395 ']' 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:57.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:57.396 09:10:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:57.396 [2024-11-28 09:10:51.304793] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:57.396 [2024-11-28 09:10:51.304956] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92395 ] 00:26:57.396 [2024-11-28 09:10:51.455368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.396 [2024-11-28 09:10:51.492792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.335 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.335 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:58.335 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:58.335 ftln1 00:26:58.335 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:58.335 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92395 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 
92395 ']' 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92395 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92395 00:26:58.592 killing process with pid 92395 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92395' 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92395 00:26:58.592 09:10:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92395 00:26:58.850 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:58.850 09:10:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:59.107 [2024-11-28 09:10:53.000138] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:26:59.107 [2024-11-28 09:10:53.000888] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92431 ] 00:26:59.108 [2024-11-28 09:10:53.150207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.108 [2024-11-28 09:10:53.182395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.488  [2024-11-28T09:10:55.547Z] Copying: 180/1024 [MB] (180 MBps) [2024-11-28T09:10:56.490Z] Copying: 425/1024 [MB] (245 MBps) [2024-11-28T09:10:57.433Z] Copying: 662/1024 [MB] (237 MBps) [2024-11-28T09:10:58.004Z] Copying: 902/1024 [MB] (240 MBps) [2024-11-28T09:10:58.264Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:27:04.144 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:04.144 Calculate MD5 checksum, iteration 1 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:04.144 09:10:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:04.144 [2024-11-28 09:10:58.106831] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:04.144 [2024-11-28 09:10:58.107301] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92484 ] 00:27:04.144 [2024-11-28 09:10:58.256131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.403 [2024-11-28 09:10:58.285265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.343  [2024-11-28T09:11:00.035Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-28T09:11:00.296Z] Copying: 1024/1024 [MB] (average 652 MBps) 00:27:06.176 00:27:06.176 09:11:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:06.176 09:11:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4680b977e119e6348c21256fdc853eeb 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:08.725 Fill FTL, iteration 2 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 
00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:08.725 09:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:08.725 [2024-11-28 09:11:02.499432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:08.725 [2024-11-28 09:11:02.499708] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92536 ] 00:27:08.725 [2024-11-28 09:11:02.646882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.725 [2024-11-28 09:11:02.675401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.733  [2024-11-28T09:11:05.239Z] Copying: 233/1024 [MB] (233 MBps) [2024-11-28T09:11:06.178Z] Copying: 479/1024 [MB] (246 MBps) [2024-11-28T09:11:07.120Z] Copying: 722/1024 [MB] (243 MBps) [2024-11-28T09:11:07.120Z] Copying: 966/1024 [MB] (244 MBps) [2024-11-28T09:11:07.381Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:27:13.261 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:13.261 Calculate MD5 checksum, iteration 2 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:13.261 09:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:13.261 [2024-11-28 09:11:07.339345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:13.261 [2024-11-28 09:11:07.339660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92592 ] 00:27:13.522 [2024-11-28 09:11:07.487681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.522 [2024-11-28 09:11:07.529839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.904  [2024-11-28T09:11:09.593Z] Copying: 647/1024 [MB] (647 MBps) [2024-11-28T09:11:10.164Z] Copying: 1024/1024 [MB] (average 619 MBps) 00:27:16.044 00:27:16.044 09:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:16.044 09:11:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:17.946 09:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:17.946 09:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9b9841e161139a3f4d80d6211a116899 00:27:17.946 09:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:17.946 09:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:17.946 09:11:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:18.204 [2024-11-28 09:11:12.158447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.204 [2024-11-28 09:11:12.158491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:18.204 [2024-11-28 09:11:12.158505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:18.204 [2024-11-28 09:11:12.158512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.204 [2024-11-28 09:11:12.158530] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.204 [2024-11-28 09:11:12.158538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:18.204 [2024-11-28 09:11:12.158547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:18.204 [2024-11-28 09:11:12.158554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.204 [2024-11-28 09:11:12.158570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.204 [2024-11-28 09:11:12.158577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:18.204 [2024-11-28 09:11:12.158584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:18.204 [2024-11-28 09:11:12.158590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.204 [2024-11-28 09:11:12.158649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.192 ms, result 0 00:27:18.204 true 00:27:18.204 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:18.462 { 00:27:18.462 "name": "ftl", 00:27:18.462 "properties": [ 00:27:18.462 { 00:27:18.462 "name": "superblock_version", 00:27:18.462 "value": 5, 00:27:18.462 "read-only": true 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "name": "base_device", 00:27:18.462 "bands": [ 00:27:18.462 { 00:27:18.462 "id": 0, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 1, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 2, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 3, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 4, 00:27:18.462 "state": "FREE", 00:27:18.462 
"validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 5, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 6, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 7, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 8, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 9, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 10, 00:27:18.462 "state": "FREE", 00:27:18.462 "validity": 0.0 00:27:18.462 }, 00:27:18.462 { 00:27:18.462 "id": 11, 00:27:18.462 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 12, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 13, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 14, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 15, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 16, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 17, 00:27:18.463 "state": "FREE", 00:27:18.463 "validity": 0.0 00:27:18.463 } 00:27:18.463 ], 00:27:18.463 "read-only": true 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "name": "cache_device", 00:27:18.463 "type": "bdev", 00:27:18.463 "chunks": [ 00:27:18.463 { 00:27:18.463 "id": 0, 00:27:18.463 "state": "INACTIVE", 00:27:18.463 "utilization": 0.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 1, 00:27:18.463 "state": "CLOSED", 00:27:18.463 "utilization": 1.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 2, 00:27:18.463 "state": "CLOSED", 00:27:18.463 "utilization": 
1.0 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 3, 00:27:18.463 "state": "OPEN", 00:27:18.463 "utilization": 0.001953125 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "id": 4, 00:27:18.463 "state": "OPEN", 00:27:18.463 "utilization": 0.0 00:27:18.463 } 00:27:18.463 ], 00:27:18.463 "read-only": true 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "name": "verbose_mode", 00:27:18.463 "value": true, 00:27:18.463 "unit": "", 00:27:18.463 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:18.463 }, 00:27:18.463 { 00:27:18.463 "name": "prep_upgrade_on_shutdown", 00:27:18.463 "value": false, 00:27:18.463 "unit": "", 00:27:18.463 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:18.463 } 00:27:18.463 ] 00:27:18.463 } 00:27:18.463 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:18.463 [2024-11-28 09:11:12.558749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.463 [2024-11-28 09:11:12.558786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:18.463 [2024-11-28 09:11:12.558809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:18.463 [2024-11-28 09:11:12.558815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.463 [2024-11-28 09:11:12.558833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.463 [2024-11-28 09:11:12.558840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:18.463 [2024-11-28 09:11:12.558846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:18.463 [2024-11-28 09:11:12.558852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.463 [2024-11-28 09:11:12.558867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:18.463 [2024-11-28 09:11:12.558874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:18.463 [2024-11-28 09:11:12.558881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:18.463 [2024-11-28 09:11:12.558886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.463 [2024-11-28 09:11:12.558934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.178 ms, result 0 00:27:18.463 true 00:27:18.463 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:18.463 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:18.463 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:18.722 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:18.722 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:18.722 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:18.980 [2024-11-28 09:11:12.967125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.980 [2024-11-28 09:11:12.967165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:18.980 [2024-11-28 09:11:12.967176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:18.980 [2024-11-28 09:11:12.967182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.980 [2024-11-28 09:11:12.967199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.980 [2024-11-28 09:11:12.967207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Set property 00:27:18.980 [2024-11-28 09:11:12.967213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:18.980 [2024-11-28 09:11:12.967219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.980 [2024-11-28 09:11:12.967234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.980 [2024-11-28 09:11:12.967241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:18.980 [2024-11-28 09:11:12.967247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:18.980 [2024-11-28 09:11:12.967252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.980 [2024-11-28 09:11:12.967302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.168 ms, result 0 00:27:18.980 true 00:27:18.980 09:11:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:19.239 { 00:27:19.239 "name": "ftl", 00:27:19.239 "properties": [ 00:27:19.239 { 00:27:19.239 "name": "superblock_version", 00:27:19.239 "value": 5, 00:27:19.239 "read-only": true 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "name": "base_device", 00:27:19.239 "bands": [ 00:27:19.239 { 00:27:19.239 "id": 0, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 1, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 2, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 3, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 4, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 5, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 
00:27:19.239 { 00:27:19.239 "id": 6, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 7, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 8, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 9, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 10, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 11, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 12, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 13, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 14, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 15, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 16, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "id": 17, 00:27:19.239 "state": "FREE", 00:27:19.239 "validity": 0.0 00:27:19.239 } 00:27:19.239 ], 00:27:19.239 "read-only": true 00:27:19.239 }, 00:27:19.239 { 00:27:19.239 "name": "cache_device", 00:27:19.239 "type": "bdev", 00:27:19.239 "chunks": [ 00:27:19.239 { 00:27:19.239 "id": 0, 00:27:19.239 "state": "INACTIVE", 00:27:19.239 "utilization": 0.0 00:27:19.239 }, 00:27:19.240 { 00:27:19.240 "id": 1, 00:27:19.240 "state": "CLOSED", 00:27:19.240 "utilization": 1.0 00:27:19.240 }, 00:27:19.240 { 00:27:19.240 "id": 2, 00:27:19.240 "state": "CLOSED", 00:27:19.240 "utilization": 1.0 00:27:19.240 }, 00:27:19.240 { 00:27:19.240 "id": 3, 00:27:19.240 "state": "OPEN", 00:27:19.240 "utilization": 0.001953125 00:27:19.240 }, 
00:27:19.240 { 00:27:19.240 "id": 4, 00:27:19.240 "state": "OPEN", 00:27:19.240 "utilization": 0.0 00:27:19.240 } 00:27:19.240 ], 00:27:19.240 "read-only": true 00:27:19.240 }, 00:27:19.240 { 00:27:19.240 "name": "verbose_mode", 00:27:19.240 "value": true, 00:27:19.240 "unit": "", 00:27:19.240 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:19.240 }, 00:27:19.240 { 00:27:19.240 "name": "prep_upgrade_on_shutdown", 00:27:19.240 "value": true, 00:27:19.240 "unit": "", 00:27:19.240 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:19.240 } 00:27:19.240 ] 00:27:19.240 } 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92273 ]] 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92273 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92273 ']' 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92273 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92273 00:27:19.240 killing process with pid 92273 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92273' 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92273 00:27:19.240 09:11:13 ftl.ftl_upgrade_shutdown 
-- common/autotest_common.sh@974 -- # wait 92273 00:27:19.240 [2024-11-28 09:11:13.329264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:19.240 [2024-11-28 09:11:13.333162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.240 [2024-11-28 09:11:13.333194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:19.240 [2024-11-28 09:11:13.333206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:19.240 [2024-11-28 09:11:13.333212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.240 [2024-11-28 09:11:13.333230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:19.240 [2024-11-28 09:11:13.333756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.240 [2024-11-28 09:11:13.333782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:19.240 [2024-11-28 09:11:13.333790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.514 ms 00:27:19.240 [2024-11-28 09:11:13.333805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.436938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.437179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:27.380 [2024-11-28 09:11:21.437205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8103.078 ms 00:27:27.380 [2024-11-28 09:11:21.437213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.438310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.438330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:27.380 [2024-11-28 09:11:21.438338] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 1.081 ms 00:27:27.380 [2024-11-28 09:11:21.438344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.439211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.439230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:27.380 [2024-11-28 09:11:21.439238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:27:27.380 [2024-11-28 09:11:21.439249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.440996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.441027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:27.380 [2024-11-28 09:11:21.441036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.714 ms 00:27:27.380 [2024-11-28 09:11:21.441043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.443641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.443688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:27.380 [2024-11-28 09:11:21.443699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.570 ms 00:27:27.380 [2024-11-28 09:11:21.443706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.443764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.443773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:27.380 [2024-11-28 09:11:21.443786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:27.380 [2024-11-28 09:11:21.443792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.445165] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.445195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:27.380 [2024-11-28 09:11:21.445202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.344 ms 00:27:27.380 [2024-11-28 09:11:21.445208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.446541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.446570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:27.380 [2024-11-28 09:11:21.446577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.298 ms 00:27:27.380 [2024-11-28 09:11:21.446583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.447767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.447897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:27.380 [2024-11-28 09:11:21.447910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:27:27.380 [2024-11-28 09:11:21.447916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.449018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.380 [2024-11-28 09:11:21.449044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:27.380 [2024-11-28 09:11:21.449051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:27:27.380 [2024-11-28 09:11:21.449056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.380 [2024-11-28 09:11:21.449080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:27.380 [2024-11-28 09:11:21.449091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 
00:27:27.380 [2024-11-28 09:11:21.449100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:27.380 [2024-11-28 09:11:21.449106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:27.380 [2024-11-28 09:11:21.449113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:27.380 [2024-11-28 09:11:21.449181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:27:27.380 [2024-11-28 09:11:21.449188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:27.381 [2024-11-28 09:11:21.449193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:27.381 [2024-11-28 09:11:21.449200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:27.381 [2024-11-28 09:11:21.449208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:27.381 [2024-11-28 09:11:21.449215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 34e90906-802b-41ad-a111-9e2030352ea7 00:27:27.381 [2024-11-28 09:11:21.449221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:27.381 [2024-11-28 09:11:21.449227] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:27.381 [2024-11-28 09:11:21.449233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:27.381 [2024-11-28 09:11:21.449246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:27.381 [2024-11-28 09:11:21.449252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:27.381 [2024-11-28 09:11:21.449262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:27.381 [2024-11-28 09:11:21.449268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:27.381 [2024-11-28 09:11:21.449273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:27.381 [2024-11-28 09:11:21.449278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:27.381 [2024-11-28 09:11:21.449284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.381 [2024-11-28 09:11:21.449295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:27.381 [2024-11-28 09:11:21.449304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.205 ms 00:27:27.381 [2024-11-28 09:11:21.449310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.451117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.381 [2024-11-28 09:11:21.451138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:27.381 [2024-11-28 09:11:21.451147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.795 ms 00:27:27.381 [2024-11-28 09:11:21.451158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.451254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.381 [2024-11-28 09:11:21.451293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:27.381 [2024-11-28 09:11:21.451303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:27:27.381 [2024-11-28 09:11:21.451310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.457427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.457457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:27.381 [2024-11-28 09:11:21.457471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.457478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.457504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.457511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:27.381 [2024-11-28 09:11:21.457518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.457524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.457594] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.457602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:27.381 [2024-11-28 09:11:21.457609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.457619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.457632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.457639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:27.381 [2024-11-28 09:11:21.457646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.457653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.468544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.468580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:27.381 [2024-11-28 09:11:21.468594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.468600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:27.381 [2024-11-28 09:11:21.477421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:27.381 [2024-11-28 
09:11:21.477504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:27.381 [2024-11-28 09:11:21.477558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:27.381 [2024-11-28 09:11:21.477649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:27.381 [2024-11-28 09:11:21.477700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:27.381 [2024-11-28 09:11:21.477760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 
[2024-11-28 09:11:21.477826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:27.381 [2024-11-28 09:11:21.477835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:27.381 [2024-11-28 09:11:21.477843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:27.381 [2024-11-28 09:11:21.477849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.381 [2024-11-28 09:11:21.477968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8144.748 ms, result 0 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92770 00:27:30.668 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92770 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92770 ']' 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.669 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.669 09:11:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 [2024-11-28 09:11:24.297727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:30.669 [2024-11-28 09:11:24.298081] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92770 ] 00:27:30.669 [2024-11-28 09:11:24.448748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.669 [2024-11-28 09:11:24.513660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.927 [2024-11-28 09:11:24.809576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.927 [2024-11-28 09:11:24.809811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.927 [2024-11-28 09:11:24.948082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.948117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:30.927 [2024-11-28 09:11:24.948130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.927 [2024-11-28 09:11:24.948138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.948184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.948192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.927 [2024-11-28 09:11:24.948199] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:30.927 [2024-11-28 09:11:24.948205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.948224] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:30.927 [2024-11-28 09:11:24.948406] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:30.927 [2024-11-28 09:11:24.948417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.948423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.927 [2024-11-28 09:11:24.948431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:27:30.927 [2024-11-28 09:11:24.948437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.949749] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:30.927 [2024-11-28 09:11:24.952147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.952267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:30.927 [2024-11-28 09:11:24.952281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.399 ms 00:27:30.927 [2024-11-28 09:11:24.952293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.952337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.952350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:30.927 [2024-11-28 09:11:24.952356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:30.927 [2024-11-28 09:11:24.952362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.958699] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.958726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.927 [2024-11-28 09:11:24.958736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.290 ms 00:27:30.927 [2024-11-28 09:11:24.958742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.958777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.958784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.927 [2024-11-28 09:11:24.958791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:30.927 [2024-11-28 09:11:24.958810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.958849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.958856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:30.927 [2024-11-28 09:11:24.958868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:30.927 [2024-11-28 09:11:24.958876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.958893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:30.927 [2024-11-28 09:11:24.960437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 [2024-11-28 09:11:24.960464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.927 [2024-11-28 09:11:24.960472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.548 ms 00:27:30.927 [2024-11-28 09:11:24.960478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.960504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.927 
[2024-11-28 09:11:24.960515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:30.927 [2024-11-28 09:11:24.960521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.927 [2024-11-28 09:11:24.960532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.927 [2024-11-28 09:11:24.960555] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:30.927 [2024-11-28 09:11:24.960571] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:30.927 [2024-11-28 09:11:24.960601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:30.927 [2024-11-28 09:11:24.960620] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:30.927 [2024-11-28 09:11:24.960704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:30.927 [2024-11-28 09:11:24.960712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:30.927 [2024-11-28 09:11:24.960723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:30.928 [2024-11-28 09:11:24.960732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:30.928 [2024-11-28 09:11:24.960739] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:30.928 [2024-11-28 09:11:24.960746] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:30.928 [2024-11-28 09:11:24.960752] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:30.928 [2024-11-28 09:11:24.960758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint 
pages: 2048 00:27:30.928 [2024-11-28 09:11:24.960763] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:30.928 [2024-11-28 09:11:24.960770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.928 [2024-11-28 09:11:24.960776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:30.928 [2024-11-28 09:11:24.960782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:27:30.928 [2024-11-28 09:11:24.960788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.928 [2024-11-28 09:11:24.960875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.928 [2024-11-28 09:11:24.960883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:30.928 [2024-11-28 09:11:24.960892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:30.928 [2024-11-28 09:11:24.960898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.928 [2024-11-28 09:11:24.960979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:30.928 [2024-11-28 09:11:24.960992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:30.928 [2024-11-28 09:11:24.960999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:30.928 [2024-11-28 09:11:24.961016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:30.928 [2024-11-28 09:11:24.961027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:30.928 [2024-11-28 09:11:24.961032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.62 MiB 00:27:30.928 [2024-11-28 09:11:24.961037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:30.928 [2024-11-28 09:11:24.961048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:30.928 [2024-11-28 09:11:24.961053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:30.928 [2024-11-28 09:11:24.961069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:30.928 [2024-11-28 09:11:24.961074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:30.928 [2024-11-28 09:11:24.961090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:30.928 [2024-11-28 09:11:24.961099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:30.928 [2024-11-28 09:11:24.961112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:30.928 [2024-11-28 09:11:24.961131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:30.928 [2024-11-28 09:11:24.961149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 
30.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:30.928 [2024-11-28 09:11:24.961167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:30.928 [2024-11-28 09:11:24.961185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:30.928 [2024-11-28 09:11:24.961205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:30.928 [2024-11-28 09:11:24.961222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:30.928 [2024-11-28 09:11:24.961240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:30.928 [2024-11-28 09:11:24.961245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961251] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:30.928 [2024-11-28 09:11:24.961260] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl] Region sb_mirror 00:27:30.928 [2024-11-28 09:11:24.961267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.928 [2024-11-28 09:11:24.961280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:30.928 [2024-11-28 09:11:24.961286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:30.928 [2024-11-28 09:11:24.961294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:30.928 [2024-11-28 09:11:24.961303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:30.928 [2024-11-28 09:11:24.961309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:30.928 [2024-11-28 09:11:24.961315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:30.928 [2024-11-28 09:11:24.961323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:30.928 [2024-11-28 09:11:24.961331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:30.928 [2024-11-28 09:11:24.961345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:30.928 [2024-11-28 09:11:24.961364] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:30.928 [2024-11-28 09:11:24.961370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:30.928 [2024-11-28 09:11:24.961377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:30.928 [2024-11-28 09:11:24.961383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:30.928 [2024-11-28 09:11:24.961430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:30.928 [2024-11-28 09:11:24.961437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 
00:27:30.928 [2024-11-28 09:11:24.961445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.928 [2024-11-28 09:11:24.961452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:30.928 [2024-11-28 09:11:24.961458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:30.928 [2024-11-28 09:11:24.961464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:30.928 [2024-11-28 09:11:24.961470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.928 [2024-11-28 09:11:24.961476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:30.928 [2024-11-28 09:11:24.961483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:27:30.928 [2024-11-28 09:11:24.961490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.928 [2024-11-28 09:11:24.961522] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:27:30.928 [2024-11-28 09:11:24.961530] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:35.134 [2024-11-28 09:11:28.862392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.862692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:35.134 [2024-11-28 09:11:28.862720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3900.850 ms 00:27:35.134 [2024-11-28 09:11:28.862733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.879102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.879155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:35.134 [2024-11-28 09:11:28.879171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.225 ms 00:27:35.134 [2024-11-28 09:11:28.879191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.879293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.879304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:35.134 [2024-11-28 09:11:28.879319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:35.134 [2024-11-28 09:11:28.879328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.900354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.900404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:35.134 [2024-11-28 09:11:28.900418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.984 ms 00:27:35.134 [2024-11-28 09:11:28.900427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.900472] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.900481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:35.134 [2024-11-28 09:11:28.900491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:35.134 [2024-11-28 09:11:28.900499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.901060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.901093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:35.134 [2024-11-28 09:11:28.901105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:27:35.134 [2024-11-28 09:11:28.901114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.901170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.901181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:35.134 [2024-11-28 09:11:28.901191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:35.134 [2024-11-28 09:11:28.901201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.909361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.909407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:35.134 [2024-11-28 09:11:28.909419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.135 ms 00:27:35.134 [2024-11-28 09:11:28.909430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.912600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:35.134 [2024-11-28 09:11:28.912645] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state 
loaded successfully 00:27:35.134 [2024-11-28 09:11:28.912660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.912671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:35.134 [2024-11-28 09:11:28.912681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.126 ms 00:27:35.134 [2024-11-28 09:11:28.912691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.917656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.917694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:35.134 [2024-11-28 09:11:28.917713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.915 ms 00:27:35.134 [2024-11-28 09:11:28.917723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.919316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.919462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:35.134 [2024-11-28 09:11:28.919478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.543 ms 00:27:35.134 [2024-11-28 09:11:28.919487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.921038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.921072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:35.134 [2024-11-28 09:11:28.921082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.515 ms 00:27:35.134 [2024-11-28 09:11:28.921090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.921430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.921448] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:35.134 [2024-11-28 09:11:28.921458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:27:35.134 [2024-11-28 09:11:28.921465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.940186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.940228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:35.134 [2024-11-28 09:11:28.940245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.702 ms 00:27:35.134 [2024-11-28 09:11:28.940253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.947704] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:35.134 [2024-11-28 09:11:28.948465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.948494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:35.134 [2024-11-28 09:11:28.948505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.158 ms 00:27:35.134 [2024-11-28 09:11:28.948517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.948599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.948611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:35.134 [2024-11-28 09:11:28.948620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:35.134 [2024-11-28 09:11:28.948628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.948692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.948702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:27:35.134 [2024-11-28 09:11:28.948711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:35.134 [2024-11-28 09:11:28.948719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.948746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.948755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:35.134 [2024-11-28 09:11:28.948764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.134 [2024-11-28 09:11:28.948771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.134 [2024-11-28 09:11:28.948820] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:35.134 [2024-11-28 09:11:28.948832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.134 [2024-11-28 09:11:28.948840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:35.135 [2024-11-28 09:11:28.948853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:35.135 [2024-11-28 09:11:28.948861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:28.952101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.135 [2024-11-28 09:11:28.952140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:35.135 [2024-11-28 09:11:28.952149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.220 ms 00:27:35.135 [2024-11-28 09:11:28.952158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:28.952235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.135 [2024-11-28 09:11:28.952246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:35.135 [2024-11-28 09:11:28.952254] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:35.135 [2024-11-28 09:11:28.952263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:28.953358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4004.737 ms, result 0 00:27:35.135 [2024-11-28 09:11:28.969019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.135 [2024-11-28 09:11:28.984993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:35.135 [2024-11-28 09:11:28.993110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.135 [2024-11-28 09:11:29.229195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.135 [2024-11-28 09:11:29.229252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.135 [2024-11-28 09:11:29.229271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:35.135 [2024-11-28 09:11:29.229281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:29.229310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.135 [2024-11-28 09:11:29.229320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.135 [2024-11-28 
09:11:29.229329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:35.135 [2024-11-28 09:11:29.229339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:29.229364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.135 [2024-11-28 09:11:29.229374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.135 [2024-11-28 09:11:29.229383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.135 [2024-11-28 09:11:29.229392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.135 [2024-11-28 09:11:29.229452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:27:35.135 true 00:27:35.135 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.396 { 00:27:35.396 "name": "ftl", 00:27:35.396 "properties": [ 00:27:35.396 { 00:27:35.396 "name": "superblock_version", 00:27:35.396 "value": 5, 00:27:35.396 "read-only": true 00:27:35.396 }, 00:27:35.396 { 00:27:35.396 "name": "base_device", 00:27:35.396 "bands": [ 00:27:35.396 { 00:27:35.396 "id": 0, 00:27:35.396 "state": "CLOSED", 00:27:35.396 "validity": 1.0 00:27:35.396 }, 00:27:35.397 { 00:27:35.397 "id": 1, 00:27:35.397 "state": "CLOSED", 00:27:35.397 "validity": 1.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 2, 00:27:35.397 "state": "CLOSED", 00:27:35.397 "validity": 0.007843137254901933 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 3, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 4, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 5, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 
00:27:35.397 "id": 6, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 7, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 8, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 9, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 10, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 11, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 12, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 13, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 14, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 15, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 16, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 17, 00:27:35.397 "state": "FREE", 00:27:35.397 "validity": 0.0 00:27:35.397 } 00:27:35.397 ], 00:27:35.397 "read-only": true 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "name": "cache_device", 00:27:35.397 "type": "bdev", 00:27:35.397 "chunks": [ 00:27:35.397 { 00:27:35.397 "id": 0, 00:27:35.397 "state": "INACTIVE", 00:27:35.397 "utilization": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 1, 00:27:35.397 "state": "OPEN", 00:27:35.397 "utilization": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 2, 00:27:35.397 "state": "OPEN", 00:27:35.397 "utilization": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "id": 3, 00:27:35.397 "state": "FREE", 00:27:35.397 "utilization": 0.0 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 
"id": 4, 00:27:35.397 "state": "FREE", 00:27:35.397 "utilization": 0.0 00:27:35.397 } 00:27:35.397 ], 00:27:35.397 "read-only": true 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "name": "verbose_mode", 00:27:35.397 "value": true, 00:27:35.397 "unit": "", 00:27:35.397 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "name": "prep_upgrade_on_shutdown", 00:27:35.397 "value": false, 00:27:35.397 "unit": "", 00:27:35.397 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.397 } 00:27:35.397 ] 00:27:35.397 } 00:27:35.397 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:35.397 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.397 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:35.658 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:35.658 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:35.658 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:35.658 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:35.658 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.919 Validate MD5 checksum, iteration 1 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:35.919 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:35.920 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:35.920 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:35.920 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:35.920 09:11:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:35.920 [2024-11-28 09:11:30.003280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:35.920 [2024-11-28 09:11:30.003631] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92839 ] 00:27:36.182 [2024-11-28 09:11:30.151295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.182 [2024-11-28 09:11:30.203144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.567  [2024-11-28T09:11:32.629Z] Copying: 572/1024 [MB] (572 MBps) [2024-11-28T09:11:33.200Z] Copying: 1024/1024 [MB] (average 595 MBps) 00:27:39.080 00:27:39.080 09:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:39.080 09:11:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4680b977e119e6348c21256fdc853eeb 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4680b977e119e6348c21256fdc853eeb != \4\6\8\0\b\9\7\7\e\1\1\9\e\6\3\4\8\c\2\1\2\5\6\f\d\c\8\5\3\e\e\b ]] 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:41.624 Validate MD5 checksum, iteration 2 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:41.624 09:11:35 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:41.624 09:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:41.624 [2024-11-28 09:11:35.380276] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:41.624 [2024-11-28 09:11:35.380386] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92910 ] 00:27:41.624 [2024-11-28 09:11:35.528346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.624 [2024-11-28 09:11:35.560654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.008  [2024-11-28T09:11:37.697Z] Copying: 540/1024 [MB] (540 MBps) [2024-11-28T09:11:39.607Z] Copying: 1024/1024 [MB] (average 595 MBps) 00:27:45.487 00:27:45.487 09:11:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:45.487 09:11:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9b9841e161139a3f4d80d6211a116899 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@105 -- # [[ 9b9841e161139a3f4d80d6211a116899 != \9\b\9\8\4\1\e\1\6\1\1\3\9\a\3\f\4\d\8\0\d\6\2\1\1\a\1\1\6\8\9\9 ]] 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92770 ]] 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92770 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92972 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92972 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92972 ']' 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:47.459 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:47.459 09:11:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.459 [2024-11-28 09:11:41.176087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:47.459 [2024-11-28 09:11:41.176209] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92972 ] 00:27:47.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92770 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:47.459 [2024-11-28 09:11:41.323883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.459 [2024-11-28 09:11:41.363344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.719 [2024-11-28 09:11:41.652316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.719 [2024-11-28 09:11:41.652364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.719 [2024-11-28 09:11:41.790496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.790527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:47.719 [2024-11-28 09:11:41.790538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:47.719 [2024-11-28 09:11:41.790546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.790590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.790598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.719 [2024-11-28 09:11:41.790607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:47.719 [2024-11-28 09:11:41.790613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.790632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:47.719 [2024-11-28 09:11:41.790820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:47.719 [2024-11-28 09:11:41.790832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.790866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.719 [2024-11-28 09:11:41.790874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:27:47.719 [2024-11-28 09:11:41.790881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.791312] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:47.719 [2024-11-28 09:11:41.795121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.795146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:47.719 [2024-11-28 09:11:41.795155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.816 ms 00:27:47.719 [2024-11-28 09:11:41.795166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.796049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.796064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:47.719 [2024-11-28 09:11:41.796072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.029 ms 00:27:47.719 [2024-11-28 09:11:41.796078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.796294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.796302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.719 [2024-11-28 09:11:41.796311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.168 ms 00:27:47.719 [2024-11-28 09:11:41.796317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.796350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.796357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.719 [2024-11-28 09:11:41.796366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:47.719 [2024-11-28 09:11:41.796372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.796395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.796402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:47.719 [2024-11-28 09:11:41.796409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:47.719 [2024-11-28 09:11:41.796416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.796435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:47.719 [2024-11-28 09:11:41.797115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.797133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.719 [2024-11-28 09:11:41.797141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.686 ms 00:27:47.719 [2024-11-28 09:11:41.797147] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.797166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.797176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:47.719 [2024-11-28 09:11:41.797182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:47.719 [2024-11-28 09:11:41.797191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.797207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:47.719 [2024-11-28 09:11:41.797223] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:47.719 [2024-11-28 09:11:41.797252] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:47.719 [2024-11-28 09:11:41.797266] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:47.719 [2024-11-28 09:11:41.797348] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:47.719 [2024-11-28 09:11:41.797357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:47.719 [2024-11-28 09:11:41.797369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:47.719 [2024-11-28 09:11:41.797377] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:47.719 [2024-11-28 09:11:41.797384] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:47.719 [2024-11-28 09:11:41.797390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:47.719 [2024-11-28 09:11:41.797396] 
ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:47.719 [2024-11-28 09:11:41.797401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:47.719 [2024-11-28 09:11:41.797407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:47.719 [2024-11-28 09:11:41.797413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.797419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:47.719 [2024-11-28 09:11:41.797426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:27:47.719 [2024-11-28 09:11:41.797432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.797503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.719 [2024-11-28 09:11:41.797509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:47.719 [2024-11-28 09:11:41.797518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:47.719 [2024-11-28 09:11:41.797526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.719 [2024-11-28 09:11:41.797631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:47.719 [2024-11-28 09:11:41.797640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:47.719 [2024-11-28 09:11:41.797646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.719 [2024-11-28 09:11:41.797653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.719 [2024-11-28 09:11:41.797659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:47.719 [2024-11-28 09:11:41.797664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:47.719 [2024-11-28 09:11:41.797670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:47.719 
[2024-11-28 09:11:41.797675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:47.719 [2024-11-28 09:11:41.797681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:47.719 [2024-11-28 09:11:41.797687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:47.720 [2024-11-28 09:11:41.797698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:47.720 [2024-11-28 09:11:41.797709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:47.720 [2024-11-28 09:11:41.797719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:47.720 [2024-11-28 09:11:41.797724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:47.720 [2024-11-28 09:11:41.797737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:47.720 [2024-11-28 09:11:41.797742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:47.720 [2024-11-28 09:11:41.797753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:47.720 [2024-11-28 09:11:41.797768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.720 
[2024-11-28 09:11:41.797780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:47.720 [2024-11-28 09:11:41.797786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:47.720 [2024-11-28 09:11:41.797819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:47.720 [2024-11-28 09:11:41.797842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:47.720 [2024-11-28 09:11:41.797860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:47.720 [2024-11-28 09:11:41.797877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:47.720 [2024-11-28 09:11:41.797895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:47.720 [2024-11-28 09:11:41.797900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 
00:27:47.720 [2024-11-28 09:11:41.797906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:47.720 [2024-11-28 09:11:41.797915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:47.720 [2024-11-28 09:11:41.797925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.720 [2024-11-28 09:11:41.797937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:47.720 [2024-11-28 09:11:41.797945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:47.720 [2024-11-28 09:11:41.797951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:47.720 [2024-11-28 09:11:41.797957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:47.720 [2024-11-28 09:11:41.797963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:47.720 [2024-11-28 09:11:41.797970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:47.720 [2024-11-28 09:11:41.797977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:47.720 [2024-11-28 09:11:41.797985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.797992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:47.720 [2024-11-28 09:11:41.797998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798011] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:47.720 [2024-11-28 09:11:41.798017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:47.720 [2024-11-28 09:11:41.798025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:47.720 [2024-11-28 09:11:41.798032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:47.720 [2024-11-28 09:11:41.798038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:47.720 [2024-11-28 09:11:41.798084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:47.720 [2024-11-28 09:11:41.798096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:47.720 [2024-11-28 09:11:41.798109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:47.720 [2024-11-28 09:11:41.798116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:47.720 [2024-11-28 09:11:41.798122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:47.720 [2024-11-28 09:11:41.798128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.798134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:47.720 [2024-11-28 09:11:41.798140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:27:47.720 [2024-11-28 09:11:41.798146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.806424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.806520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:47.720 [2024-11-28 09:11:41.806570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.234 ms 00:27:47.720 [2024-11-28 09:11:41.806588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.806628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.806754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize 
band addresses 00:27:47.720 [2024-11-28 09:11:41.806784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:47.720 [2024-11-28 09:11:41.806816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.826278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.826401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:47.720 [2024-11-28 09:11:41.826460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.407 ms 00:27:47.720 [2024-11-28 09:11:41.826486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.826546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.826576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:47.720 [2024-11-28 09:11:41.826599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:47.720 [2024-11-28 09:11:41.826620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.826753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.826819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:47.720 [2024-11-28 09:11:41.826845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:47.720 [2024-11-28 09:11:41.826867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.826933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.826958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:47.720 [2024-11-28 09:11:41.827023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:47.720 [2024-11-28 09:11:41.827049] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.833965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.834061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:47.720 [2024-11-28 09:11:41.834115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.878 ms 00:27:47.720 [2024-11-28 09:11:41.834141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.720 [2024-11-28 09:11:41.834254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.720 [2024-11-28 09:11:41.834551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:47.720 [2024-11-28 09:11:41.834654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:47.720 [2024-11-28 09:11:41.834684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.839084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.839190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:47.978 [2024-11-28 09:11:41.839246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.317 ms 00:27:47.978 [2024-11-28 09:11:41.839279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.840733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.840840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:47.978 [2024-11-28 09:11:41.840903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.327 ms 00:27:47.978 [2024-11-28 09:11:41.840927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.856374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.856471] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:47.978 [2024-11-28 09:11:41.856486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.385 ms 00:27:47.978 [2024-11-28 09:11:41.856493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.856614] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:47.978 [2024-11-28 09:11:41.856704] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:47.978 [2024-11-28 09:11:41.856789] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:47.978 [2024-11-28 09:11:41.856892] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:47.978 [2024-11-28 09:11:41.856900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.856907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:47.978 [2024-11-28 09:11:41.856915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:27:47.978 [2024-11-28 09:11:41.856921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.856952] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:47.978 [2024-11-28 09:11:41.856964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.856970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:47.978 [2024-11-28 09:11:41.856977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:47.978 [2024-11-28 09:11:41.856984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.859251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.859336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:47.978 [2024-11-28 09:11:41.859381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.246 ms 00:27:47.978 [2024-11-28 09:11:41.859401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.860009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.860080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:47.978 [2024-11-28 09:11:41.860118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:47.978 [2024-11-28 09:11:41.860141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.978 [2024-11-28 09:11:41.860216] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:47.978 [2024-11-28 09:11:41.860401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.978 [2024-11-28 09:11:41.860424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:47.978 [2024-11-28 09:11:41.860472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:27:47.978 [2024-11-28 09:11:41.860494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-11-28 09:11:42.393555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-11-28 09:11:42.393811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:48.549 [2024-11-28 09:11:42.393884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 532.771 ms 00:27:48.549 [2024-11-28 09:11:42.393909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-11-28 09:11:42.395289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 
[2024-11-28 09:11:42.395396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:48.549 [2024-11-28 09:11:42.395457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.937 ms 00:27:48.549 [2024-11-28 09:11:42.395485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-11-28 09:11:42.395904] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:48.549 [2024-11-28 09:11:42.395963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-11-28 09:11:42.396028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:48.549 [2024-11-28 09:11:42.396091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:27:48.549 [2024-11-28 09:11:42.396113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-11-28 09:11:42.396180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.549 [2024-11-28 09:11:42.396205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:48.549 [2024-11-28 09:11:42.396225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:48.549 [2024-11-28 09:11:42.396250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:48.549 [2024-11-28 09:11:42.396314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 536.079 ms, result 0 00:27:48.549 [2024-11-28 09:11:42.396460] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:48.549 [2024-11-28 09:11:42.396550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:48.550 [2024-11-28 09:11:42.396627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:48.550 
[2024-11-28 09:11:42.396653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:27:48.550 [2024-11-28 09:11:42.396671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.024407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.024566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:49.121 [2024-11-28 09:11:43.024629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 627.293 ms 00:27:49.121 [2024-11-28 09:11:43.024653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.026442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.026531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:49.121 [2024-11-28 09:11:43.026585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:27:49.121 [2024-11-28 09:11:43.026608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.027146] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:49.121 [2024-11-28 09:11:43.027250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.027299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:49.121 [2024-11-28 09:11:43.027323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:27:49.121 [2024-11-28 09:11:43.027342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.027383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.027405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:49.121 [2024-11-28 
09:11:43.027430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:49.121 [2024-11-28 09:11:43.027451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.027502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 631.038 ms, result 0 00:27:49.121 [2024-11-28 09:11:43.027601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:49.121 [2024-11-28 09:11:43.027645] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:49.121 [2024-11-28 09:11:43.027676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.027696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:49.121 [2024-11-28 09:11:43.027716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1167.475 ms 00:27:49.121 [2024-11-28 09:11:43.027821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.027865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.027890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:49.121 [2024-11-28 09:11:43.027910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.121 [2024-11-28 09:11:43.027928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.036293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:49.121 [2024-11-28 09:11:43.036486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.036518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:49.121 [2024-11-28 09:11:43.036576] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.484 ms 00:27:49.121 [2024-11-28 09:11:43.036599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.121 [2024-11-28 09:11:43.037311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.121 [2024-11-28 09:11:43.037392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:49.121 [2024-11-28 09:11:43.037440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:27:50.064 [2024-11-28 09:11:43.990706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:43.997086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.997145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:50.064 [2024-11-28 09:11:43.997170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.818 ms 00:27:50.064 [2024-11-28 09:11:43.997205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:43.997451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.997495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:50.064 [2024-11-28 09:11:43.997515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:50.064 [2024-11-28 09:11:43.997530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:43.997787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.997856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:50.064 [2024-11-28 09:11:43.997875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:50.064 [2024-11-28 09:11:43.997890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 
[2024-11-28 09:11:43.997938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.998012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:50.064 [2024-11-28 09:11:43.998040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:50.064 [2024-11-28 09:11:43.998056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:43.998118] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:50.064 [2024-11-28 09:11:43.998164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.998179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:50.064 [2024-11-28 09:11:43.998196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:50.064 [2024-11-28 09:11:43.998210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:43.998319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.064 [2024-11-28 09:11:43.998341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:50.064 [2024-11-28 09:11:43.998357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:50.064 [2024-11-28 09:11:43.998371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.064 [2024-11-28 09:11:44.000911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2209.562 ms, result 0 00:27:50.064 [2024-11-28 09:11:44.013718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.064 [2024-11-28 09:11:44.029719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:50.064 [2024-11-28 09:11:44.037838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4420 *** 00:27:50.064 Validate MD5 checksum, iteration 1 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:50.064 09:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 
--skip=0 00:27:50.065 [2024-11-28 09:11:44.129432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:50.065 [2024-11-28 09:11:44.129701] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93007 ] 00:27:50.326 [2024-11-28 09:11:44.277795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.326 [2024-11-28 09:11:44.316597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.713  [2024-11-28T09:11:46.775Z] Copying: 512/1024 [MB] (512 MBps) [2024-11-28T09:11:46.775Z] Copying: 1002/1024 [MB] (490 MBps) [2024-11-28T09:11:47.714Z] Copying: 1024/1024 [MB] (average 500 MBps) 00:27:53.594 00:27:53.594 09:11:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:53.595 09:11:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4680b977e119e6348c21256fdc853eeb 00:27:55.496 Validate MD5 checksum, iteration 2 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4680b977e119e6348c21256fdc853eeb != \4\6\8\0\b\9\7\7\e\1\1\9\e\6\3\4\8\c\2\1\2\5\6\f\d\c\8\5\3\e\e\b ]] 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:55.496 09:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:55.496 [2024-11-28 09:11:49.182102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:55.496 [2024-11-28 09:11:49.182354] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93069 ] 00:27:55.496 [2024-11-28 09:11:49.328195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.496 [2024-11-28 09:11:49.357182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.881  [2024-11-28T09:11:51.261Z] Copying: 678/1024 [MB] (678 MBps) [2024-11-28T09:11:51.832Z] Copying: 1024/1024 [MB] (average 668 MBps) 00:27:57.712 00:27:57.713 09:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:57.713 09:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9b9841e161139a3f4d80d6211a116899 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9b9841e161139a3f4d80d6211a116899 != \9\b\9\8\4\1\e\1\6\1\1\3\9\a\3\f\4\d\8\0\d\6\2\1\1\a\1\1\6\8\9\9 ]] 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- 
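The xtrace lines above show the shape of the test's validate loop: read a run of 1 MiB blocks at an increasing `--skip` offset, hash the result with `md5sum`, strip the filename with `cut`, and compare against the expected sum (the glob-escaped right-hand side in the `[[ ... != ... ]]` trace is just bash's rendering of a literal string compare). Below is a minimal self-contained sketch of that pattern, assuming local scratch files stand in for the `tcp_dd` reads from bdev `ftln1`; the temp-dir layout, chunk sizes, and variable names here are illustrative, not the test's own:

```shell
#!/usr/bin/env bash
# Hedged sketch of a chunked checksum-validation loop (assumed layout,
# not the actual upgrade_shutdown.sh): hash each chunk when written,
# then re-read and compare, iterating like skip=0, skip+=count, ...
set -e
work=$(mktemp -d)
iterations=2
count=4   # blocks per chunk (the real test uses 1024 x 1 MiB blocks)

# "Write" phase: generate chunks and record each one's MD5 sum.
for ((i = 0; i < iterations; i++)); do
  dd if=/dev/urandom of="$work/chunk$i" bs=1024 count=$count status=none
  md5sum "$work/chunk$i" | cut -f1 -d' ' > "$work/chunk$i.md5"
done

# "Validate" phase: re-read each chunk, rehash, compare to the stored sum.
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  sum=$(md5sum "$work/chunk$i" | cut -f1 -d' ')
  expected=$(cat "$work/chunk$i.md5")
  [[ "$sum" == "$expected" ]] || { echo MISMATCH; exit 1; }
done

rm -rf "$work"
echo OK
```

The real test interleaves the write and validate phases across an FTL shutdown/restart, which is why a stored-sum comparison (rather than hashing twice in a row) is the meaningful check: it proves the data survived the restart intact.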
ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92972 ]] 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92972 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92972 ']' 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92972 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92972 00:27:59.625 killing process with pid 92972 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92972' 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92972 00:27:59.625 09:11:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92972 00:27:59.625 [2024-11-28 09:11:53.733670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:59.625 [2024-11-28 09:11:53.739088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.625 [2024-11-28 09:11:53.739120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Deinit core IO channel 00:27:59.625 [2024-11-28 09:11:53.739130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:59.625 [2024-11-28 09:11:53.739136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.625 [2024-11-28 09:11:53.739152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:59.625 [2024-11-28 09:11:53.739525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.625 [2024-11-28 09:11:53.739546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:59.625 [2024-11-28 09:11:53.739553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:27:59.625 [2024-11-28 09:11:53.739559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.625 [2024-11-28 09:11:53.739740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.625 [2024-11-28 09:11:53.739748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:59.625 [2024-11-28 09:11:53.739755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:27:59.625 [2024-11-28 09:11:53.739761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.625 [2024-11-28 09:11:53.741178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.625 [2024-11-28 09:11:53.741212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:59.625 [2024-11-28 09:11:53.741220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:27:59.625 [2024-11-28 09:11:53.741225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.625 [2024-11-28 09:11:53.742105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.625 [2024-11-28 09:11:53.742210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:59.625 [2024-11-28 09:11:53.742226] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:27:59.625 [2024-11-28 09:11:53.742233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.887 [2024-11-28 09:11:53.743958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.887 [2024-11-28 09:11:53.743984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:59.887 [2024-11-28 09:11:53.743993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.685 ms 00:27:59.887 [2024-11-28 09:11:53.743999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.887 [2024-11-28 09:11:53.745043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.887 [2024-11-28 09:11:53.745071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:59.887 [2024-11-28 09:11:53.745078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.015 ms 00:27:59.887 [2024-11-28 09:11:53.745084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.887 [2024-11-28 09:11:53.745143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.887 [2024-11-28 09:11:53.745150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:59.887 [2024-11-28 09:11:53.745156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:59.887 [2024-11-28 09:11:53.745162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.887 [2024-11-28 09:11:53.746678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.887 [2024-11-28 09:11:53.746767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:59.887 [2024-11-28 09:11:53.746826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.503 ms 00:27:59.887 [2024-11-28 09:11:53.746872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.887 
[2024-11-28 09:11:53.748344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.887 [2024-11-28 09:11:53.748429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:59.887 [2024-11-28 09:11:53.748468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.412 ms 00:27:59.888 [2024-11-28 09:11:53.748485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.749689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.888 [2024-11-28 09:11:53.749777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:59.888 [2024-11-28 09:11:53.749829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.172 ms 00:27:59.888 [2024-11-28 09:11:53.749847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.751027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.888 [2024-11-28 09:11:53.751109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:59.888 [2024-11-28 09:11:53.751147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:27:59.888 [2024-11-28 09:11:53.751164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.751194] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:59.888 [2024-11-28 09:11:53.751236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.888 [2024-11-28 09:11:53.751307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.888 [2024-11-28 09:11:53.751330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:59.888 [2024-11-28 09:11:53.751353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.888 [2024-11-28 09:11:53.751562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:27:59.888 [2024-11-28 09:11:53.751569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:59.888 [2024-11-28 09:11:53.751575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 34e90906-802b-41ad-a111-9e2030352ea7 00:27:59.888 [2024-11-28 09:11:53.751582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:59.888 [2024-11-28 09:11:53.751587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:59.888 [2024-11-28 09:11:53.751593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:59.888 [2024-11-28 09:11:53.751598] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:59.888 [2024-11-28 09:11:53.751604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:59.888 [2024-11-28 09:11:53.751611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:59.888 [2024-11-28 09:11:53.751616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:59.888 [2024-11-28 09:11:53.751621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:59.888 [2024-11-28 09:11:53.751626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:59.888 [2024-11-28 09:11:53.751631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.888 [2024-11-28 09:11:53.751637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:59.888 [2024-11-28 09:11:53.751648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.438 ms 00:27:59.888 [2024-11-28 09:11:53.751656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.752927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.888 [2024-11-28 09:11:53.752950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:59.888 [2024-11-28 09:11:53.752957] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.256 ms 00:27:59.888 [2024-11-28 09:11:53.752964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.753030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.888 [2024-11-28 09:11:53.753036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:59.888 [2024-11-28 09:11:53.753046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:59.888 [2024-11-28 09:11:53.753051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.757512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.757538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:59.888 [2024-11-28 09:11:53.757545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.757551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.757573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.757579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:59.888 [2024-11-28 09:11:53.757606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.757612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.757665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.757673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:59.888 [2024-11-28 09:11:53.757680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.757686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 
09:11:53.757699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.757705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:59.888 [2024-11-28 09:11:53.757711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.757719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.765301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.765340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:59.888 [2024-11-28 09:11:53.765349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.765355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:59.888 [2024-11-28 09:11:53.771538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:59.888 [2024-11-28 09:11:53.771589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 
00:27:59.888 [2024-11-28 09:11:53.771650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:59.888 [2024-11-28 09:11:53.771725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:59.888 [2024-11-28 09:11:53.771766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:59.888 [2024-11-28 09:11:53.771834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.888 [2024-11-28 09:11:53.771876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.888 [2024-11-28 09:11:53.771885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:59.888 [2024-11-28 09:11:53.771891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.888 [2024-11-28 09:11:53.771897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:59.888 [2024-11-28 09:11:53.771992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 32.882 ms, result 0 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:00.150 Remove shared memory files 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92770 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:00.150 ************************************ 00:28:00.150 END TEST ftl_upgrade_shutdown 00:28:00.150 ************************************ 00:28:00.150 00:28:00.150 real 1m11.985s 00:28:00.150 user 1m34.931s 00:28:00.150 sys 0m19.723s 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.150 09:11:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.150 09:11:54 ftl -- 
ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:00.150 09:11:54 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:00.150 09:11:54 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:00.151 09:11:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.151 09:11:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:00.151 ************************************ 00:28:00.151 START TEST ftl_restore_fast 00:28:00.151 ************************************ 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:00.151 * Looking for test storage... 00:28:00.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # 
ver1_l=2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:00.151 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:00.151 --rc genhtml_branch_coverage=1 00:28:00.151 --rc genhtml_function_coverage=1 00:28:00.151 --rc genhtml_legend=1 00:28:00.151 --rc geninfo_all_blocks=1 00:28:00.151 --rc geninfo_unexecuted_blocks=1 00:28:00.151 00:28:00.151 ' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.151 --rc genhtml_branch_coverage=1 00:28:00.151 --rc genhtml_function_coverage=1 00:28:00.151 --rc genhtml_legend=1 00:28:00.151 --rc geninfo_all_blocks=1 00:28:00.151 --rc geninfo_unexecuted_blocks=1 00:28:00.151 00:28:00.151 ' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.151 --rc genhtml_branch_coverage=1 00:28:00.151 --rc genhtml_function_coverage=1 00:28:00.151 --rc genhtml_legend=1 00:28:00.151 --rc geninfo_all_blocks=1 00:28:00.151 --rc geninfo_unexecuted_blocks=1 00:28:00.151 00:28:00.151 ' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:00.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.151 --rc genhtml_branch_coverage=1 00:28:00.151 --rc genhtml_function_coverage=1 00:28:00.151 --rc genhtml_legend=1 00:28:00.151 --rc geninfo_all_blocks=1 00:28:00.151 --rc geninfo_unexecuted_blocks=1 00:28:00.151 00:28:00.151 ' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:00.151 09:11:54 
ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qktpys1lDQ 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- 
ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93197 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93197 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93197 ']' 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:00.151 09:11:54 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:00.413 [2024-11-28 09:11:54.342470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:00.413 [2024-11-28 09:11:54.342738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93197 ] 00:28:00.413 [2024-11-28 09:11:54.489017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.674 [2024-11-28 09:11:54.536007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 
00:28:01.246 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:01.507 { 00:28:01.507 "name": "nvme0n1", 00:28:01.507 "aliases": [ 00:28:01.507 "8e582add-5d93-4836-9fac-76f4cbba3915" 00:28:01.507 ], 00:28:01.507 "product_name": "NVMe disk", 00:28:01.507 "block_size": 4096, 00:28:01.507 "num_blocks": 1310720, 00:28:01.507 "uuid": "8e582add-5d93-4836-9fac-76f4cbba3915", 00:28:01.507 "numa_id": -1, 00:28:01.507 "assigned_rate_limits": { 00:28:01.507 "rw_ios_per_sec": 0, 00:28:01.507 "rw_mbytes_per_sec": 0, 00:28:01.507 "r_mbytes_per_sec": 0, 00:28:01.507 "w_mbytes_per_sec": 0 00:28:01.507 }, 00:28:01.507 "claimed": true, 00:28:01.507 "claim_type": "read_many_write_one", 00:28:01.507 "zoned": false, 00:28:01.507 "supported_io_types": { 00:28:01.507 "read": true, 00:28:01.507 "write": true, 00:28:01.507 "unmap": true, 00:28:01.507 "flush": true, 00:28:01.507 "reset": true, 00:28:01.507 "nvme_admin": true, 00:28:01.507 "nvme_io": true, 00:28:01.507 "nvme_io_md": false, 00:28:01.507 "write_zeroes": true, 00:28:01.507 "zcopy": false, 00:28:01.507 "get_zone_info": false, 00:28:01.507 "zone_management": false, 00:28:01.507 "zone_append": false, 00:28:01.507 "compare": true, 00:28:01.507 "compare_and_write": false, 00:28:01.507 "abort": true, 00:28:01.507 "seek_hole": false, 00:28:01.507 "seek_data": false, 00:28:01.507 "copy": true, 00:28:01.507 "nvme_iov_md": false 00:28:01.507 }, 00:28:01.507 "driver_specific": { 00:28:01.507 "nvme": [ 00:28:01.507 { 00:28:01.507 "pci_address": "0000:00:11.0", 00:28:01.507 "trid": { 00:28:01.507 "trtype": "PCIe", 00:28:01.507 "traddr": "0000:00:11.0" 00:28:01.507 }, 00:28:01.507 "ctrlr_data": { 00:28:01.507 "cntlid": 0, 00:28:01.507 "vendor_id": "0x1b36", 00:28:01.507 "model_number": "QEMU NVMe Ctrl", 00:28:01.507 "serial_number": "12341", 00:28:01.507 
"firmware_revision": "8.0.0", 00:28:01.507 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:01.507 "oacs": { 00:28:01.507 "security": 0, 00:28:01.507 "format": 1, 00:28:01.507 "firmware": 0, 00:28:01.507 "ns_manage": 1 00:28:01.507 }, 00:28:01.507 "multi_ctrlr": false, 00:28:01.507 "ana_reporting": false 00:28:01.507 }, 00:28:01.507 "vs": { 00:28:01.507 "nvme_version": "1.4" 00:28:01.507 }, 00:28:01.507 "ns_data": { 00:28:01.507 "id": 1, 00:28:01.507 "can_share": false 00:28:01.507 } 00:28:01.507 } 00:28:01.507 ], 00:28:01.507 "mp_policy": "active_passive" 00:28:01.507 } 00:28:01.507 } 00:28:01.507 ]' 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:01.507 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:01.769 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=846d5a56-ca3f-48e7-aa04-557594324c75 00:28:01.769 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:01.769 09:11:55 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
846d5a56-ca3f-48e7-aa04-557594324c75 00:28:02.030 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:02.292 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f00d4610-f7bc-4d47-8023-01799efa4ceb 00:28:02.292 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f00d4610-f7bc-4d47-8023-01799efa4ceb 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:02.554 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:02.815 { 00:28:02.815 "name": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:02.815 "aliases": [ 00:28:02.815 "lvs/nvme0n1p0" 00:28:02.815 ], 00:28:02.815 "product_name": "Logical Volume", 00:28:02.815 "block_size": 4096, 00:28:02.815 "num_blocks": 26476544, 00:28:02.815 "uuid": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:02.815 "assigned_rate_limits": { 00:28:02.815 "rw_ios_per_sec": 0, 00:28:02.815 "rw_mbytes_per_sec": 0, 00:28:02.815 "r_mbytes_per_sec": 0, 00:28:02.815 "w_mbytes_per_sec": 0 00:28:02.815 }, 00:28:02.815 "claimed": false, 00:28:02.815 "zoned": false, 00:28:02.815 "supported_io_types": { 00:28:02.815 "read": true, 00:28:02.815 "write": true, 00:28:02.815 "unmap": true, 00:28:02.815 "flush": false, 00:28:02.815 "reset": true, 00:28:02.815 "nvme_admin": false, 00:28:02.815 "nvme_io": false, 00:28:02.815 "nvme_io_md": false, 00:28:02.815 "write_zeroes": true, 00:28:02.815 "zcopy": false, 00:28:02.815 "get_zone_info": false, 00:28:02.815 "zone_management": false, 00:28:02.815 "zone_append": false, 00:28:02.815 "compare": false, 00:28:02.815 "compare_and_write": false, 00:28:02.815 "abort": false, 00:28:02.815 "seek_hole": true, 00:28:02.815 "seek_data": true, 00:28:02.815 "copy": false, 00:28:02.815 "nvme_iov_md": false 00:28:02.815 }, 00:28:02.815 "driver_specific": { 00:28:02.815 "lvol": { 00:28:02.815 "lvol_store_uuid": "f00d4610-f7bc-4d47-8023-01799efa4ceb", 00:28:02.815 "base_bdev": "nvme0n1", 00:28:02.815 "thin_provision": true, 00:28:02.815 "num_allocated_clusters": 0, 00:28:02.815 "snapshot": false, 00:28:02.815 "clone": false, 00:28:02.815 "esnap_clone": false 00:28:02.815 } 00:28:02.815 } 00:28:02.815 } 00:28:02.815 ]' 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- 
# jq '.[] .num_blocks' 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:02.815 09:11:56 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:03.076 09:11:57 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:03.077 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.338 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:03.338 { 00:28:03.338 "name": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:03.338 "aliases": [ 00:28:03.338 "lvs/nvme0n1p0" 00:28:03.338 ], 00:28:03.338 "product_name": "Logical Volume", 00:28:03.338 "block_size": 4096, 00:28:03.338 "num_blocks": 26476544, 00:28:03.338 "uuid": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:03.338 "assigned_rate_limits": { 
00:28:03.338 "rw_ios_per_sec": 0, 00:28:03.338 "rw_mbytes_per_sec": 0, 00:28:03.338 "r_mbytes_per_sec": 0, 00:28:03.338 "w_mbytes_per_sec": 0 00:28:03.338 }, 00:28:03.338 "claimed": false, 00:28:03.338 "zoned": false, 00:28:03.338 "supported_io_types": { 00:28:03.338 "read": true, 00:28:03.338 "write": true, 00:28:03.338 "unmap": true, 00:28:03.338 "flush": false, 00:28:03.338 "reset": true, 00:28:03.338 "nvme_admin": false, 00:28:03.338 "nvme_io": false, 00:28:03.338 "nvme_io_md": false, 00:28:03.338 "write_zeroes": true, 00:28:03.338 "zcopy": false, 00:28:03.338 "get_zone_info": false, 00:28:03.338 "zone_management": false, 00:28:03.338 "zone_append": false, 00:28:03.338 "compare": false, 00:28:03.338 "compare_and_write": false, 00:28:03.339 "abort": false, 00:28:03.339 "seek_hole": true, 00:28:03.339 "seek_data": true, 00:28:03.339 "copy": false, 00:28:03.339 "nvme_iov_md": false 00:28:03.339 }, 00:28:03.339 "driver_specific": { 00:28:03.339 "lvol": { 00:28:03.339 "lvol_store_uuid": "f00d4610-f7bc-4d47-8023-01799efa4ceb", 00:28:03.339 "base_bdev": "nvme0n1", 00:28:03.339 "thin_provision": true, 00:28:03.339 "num_allocated_clusters": 0, 00:28:03.339 "snapshot": false, 00:28:03.339 "clone": false, 00:28:03.339 "esnap_clone": false 00:28:03.339 } 00:28:03.339 } 00:28:03.339 } 00:28:03.339 ]' 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:03.339 09:11:57 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:03.339 
09:11:57 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:03.600 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 861088ec-1ec4-4d47-a385-d613695dbe34 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:03.862 { 00:28:03.862 "name": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:03.862 "aliases": [ 00:28:03.862 "lvs/nvme0n1p0" 00:28:03.862 ], 00:28:03.862 "product_name": "Logical Volume", 00:28:03.862 "block_size": 4096, 00:28:03.862 "num_blocks": 26476544, 00:28:03.862 "uuid": "861088ec-1ec4-4d47-a385-d613695dbe34", 00:28:03.862 "assigned_rate_limits": { 00:28:03.862 "rw_ios_per_sec": 0, 00:28:03.862 "rw_mbytes_per_sec": 0, 00:28:03.862 "r_mbytes_per_sec": 0, 00:28:03.862 "w_mbytes_per_sec": 0 00:28:03.862 }, 00:28:03.862 "claimed": false, 00:28:03.862 "zoned": false, 00:28:03.862 "supported_io_types": { 00:28:03.862 "read": true, 00:28:03.862 "write": true, 00:28:03.862 "unmap": true, 00:28:03.862 "flush": false, 00:28:03.862 "reset": true, 00:28:03.862 "nvme_admin": false, 00:28:03.862 "nvme_io": false, 00:28:03.862 "nvme_io_md": false, 00:28:03.862 "write_zeroes": true, 00:28:03.862 "zcopy": false, 00:28:03.862 "get_zone_info": false, 
00:28:03.862 "zone_management": false, 00:28:03.862 "zone_append": false, 00:28:03.862 "compare": false, 00:28:03.862 "compare_and_write": false, 00:28:03.862 "abort": false, 00:28:03.862 "seek_hole": true, 00:28:03.862 "seek_data": true, 00:28:03.862 "copy": false, 00:28:03.862 "nvme_iov_md": false 00:28:03.862 }, 00:28:03.862 "driver_specific": { 00:28:03.862 "lvol": { 00:28:03.862 "lvol_store_uuid": "f00d4610-f7bc-4d47-8023-01799efa4ceb", 00:28:03.862 "base_bdev": "nvme0n1", 00:28:03.862 "thin_provision": true, 00:28:03.862 "num_allocated_clusters": 0, 00:28:03.862 "snapshot": false, 00:28:03.862 "clone": false, 00:28:03.862 "esnap_clone": false 00:28:03.862 } 00:28:03.862 } 00:28:03.862 } 00:28:03.862 ]' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 861088ec-1ec4-4d47-a385-d613695dbe34 --l2p_dram_limit 10' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # 
ftl_construct_args+=' --fast-shutdown' 00:28:03.862 09:11:57 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 861088ec-1ec4-4d47-a385-d613695dbe34 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:04.124 [2024-11-28 09:11:58.005353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.005504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:04.125 [2024-11-28 09:11:58.005525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:04.125 [2024-11-28 09:11:58.005536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.005621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.005635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.125 [2024-11-28 09:11:58.005643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:04.125 [2024-11-28 09:11:58.005655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.005683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:04.125 [2024-11-28 09:11:58.005974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:04.125 [2024-11-28 09:11:58.005991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.006001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.125 [2024-11-28 09:11:58.006012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:04.125 [2024-11-28 09:11:58.006024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.006090] mngt/ftl_mngt_md.c: 
570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 10168779-6628-4374-9869-80940d4e4796 00:28:04.125 [2024-11-28 09:11:58.007469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.007490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:04.125 [2024-11-28 09:11:58.007501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:04.125 [2024-11-28 09:11:58.007510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.014644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.014677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.125 [2024-11-28 09:11:58.014690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.080 ms 00:28:04.125 [2024-11-28 09:11:58.014698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.014776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.014785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.125 [2024-11-28 09:11:58.014794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:04.125 [2024-11-28 09:11:58.014820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.014867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.014877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:04.125 [2024-11-28 09:11:58.014887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:04.125 [2024-11-28 09:11:58.014895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.014918] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:04.125 [2024-11-28 09:11:58.016701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.016849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.125 [2024-11-28 09:11:58.016867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:28:04.125 [2024-11-28 09:11:58.016877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.016910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.016921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:04.125 [2024-11-28 09:11:58.016929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:04.125 [2024-11-28 09:11:58.016940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.016957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:04.125 [2024-11-28 09:11:58.017103] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:04.125 [2024-11-28 09:11:58.017115] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:04.125 [2024-11-28 09:11:58.017128] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:04.125 [2024-11-28 09:11:58.017139] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:04.125 [2024-11-28 
09:11:58.017171] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:04.125 [2024-11-28 09:11:58.017179] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:04.125 [2024-11-28 09:11:58.017190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:04.125 [2024-11-28 09:11:58.017199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.017212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:04.125 [2024-11-28 09:11:58.017223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:28:04.125 [2024-11-28 09:11:58.017233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.017316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.125 [2024-11-28 09:11:58.017328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:04.125 [2024-11-28 09:11:58.017335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:04.125 [2024-11-28 09:11:58.017347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.125 [2024-11-28 09:11:58.017441] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:04.125 [2024-11-28 09:11:58.017459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:04.125 [2024-11-28 09:11:58.017468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:04.125 [2024-11-28 09:11:58.017496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:04.125 [2024-11-28 09:11:58.017524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.125 [2024-11-28 09:11:58.017541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:04.125 [2024-11-28 09:11:58.017550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:04.125 [2024-11-28 09:11:58.017558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.125 [2024-11-28 09:11:58.017570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:04.125 [2024-11-28 09:11:58.017578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:04.125 [2024-11-28 09:11:58.017605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:04.125 [2024-11-28 09:11:58.017624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:04.125 [2024-11-28 09:11:58.017649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:04.125 [2024-11-28 09:11:58.017677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017684] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:04.125 [2024-11-28 09:11:58.017701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:04.125 [2024-11-28 09:11:58.017733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:04.125 [2024-11-28 09:11:58.017759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.125 [2024-11-28 09:11:58.017777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:04.125 [2024-11-28 09:11:58.017785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:04.125 [2024-11-28 09:11:58.017792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.125 [2024-11-28 09:11:58.017814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:04.125 [2024-11-28 09:11:58.017821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:04.125 [2024-11-28 09:11:58.017829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:04.125 [2024-11-28 09:11:58.017844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:04.125 [2024-11-28 09:11:58.017850] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.125 [2024-11-28 09:11:58.017858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:04.125 [2024-11-28 09:11:58.017866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:04.125 [2024-11-28 09:11:58.017878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.125 [2024-11-28 09:11:58.017885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.126 [2024-11-28 09:11:58.017895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:04.126 [2024-11-28 09:11:58.017902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:04.126 [2024-11-28 09:11:58.017910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:04.126 [2024-11-28 09:11:58.017917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:04.126 [2024-11-28 09:11:58.017925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:04.126 [2024-11-28 09:11:58.017932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:04.126 [2024-11-28 09:11:58.017944] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:04.126 [2024-11-28 09:11:58.017953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.017963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:04.126 [2024-11-28 09:11:58.017971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:04.126 [2024-11-28 09:11:58.017980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:04.126 [2024-11-28 09:11:58.017987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:04.126 [2024-11-28 09:11:58.017998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:04.126 [2024-11-28 09:11:58.018005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:04.126 [2024-11-28 09:11:58.018016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:04.126 [2024-11-28 09:11:58.018024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:04.126 [2024-11-28 09:11:58.018033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:04.126 [2024-11-28 09:11:58.018040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:04.126 [2024-11-28 
09:11:58.018081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:04.126 [2024-11-28 09:11:58.018091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.126 [2024-11-28 09:11:58.018108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:04.126 [2024-11-28 09:11:58.018117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:04.126 [2024-11-28 09:11:58.018125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:04.126 [2024-11-28 09:11:58.018134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.126 [2024-11-28 09:11:58.018142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:04.126 [2024-11-28 09:11:58.018153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:28:04.126 [2024-11-28 09:11:58.018160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.126 [2024-11-28 09:11:58.018207] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:28:04.126 [2024-11-28 09:11:58.018216] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:08.332 [2024-11-28 09:12:02.312469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.312824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:08.332 [2024-11-28 09:12:02.312865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4294.235 ms 00:28:08.332 [2024-11-28 09:12:02.312877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.332098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.332161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:08.332 [2024-11-28 09:12:02.332181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.077 ms 00:28:08.332 [2024-11-28 09:12:02.332191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.332336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.332347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:08.332 [2024-11-28 09:12:02.332364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:08.332 [2024-11-28 09:12:02.332374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.348280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.348345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:08.332 [2024-11-28 09:12:02.348362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.831 ms 00:28:08.332 [2024-11-28 09:12:02.348371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.348414] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.348430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:08.332 [2024-11-28 09:12:02.348443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:08.332 [2024-11-28 09:12:02.348451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.349207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.349249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:08.332 [2024-11-28 09:12:02.349268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:28:08.332 [2024-11-28 09:12:02.349278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.349407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.349416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:08.332 [2024-11-28 09:12:02.349437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:08.332 [2024-11-28 09:12:02.349451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.372024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 [2024-11-28 09:12:02.372278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:08.332 [2024-11-28 09:12:02.372305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.543 ms 00:28:08.332 [2024-11-28 09:12:02.372317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.332 [2024-11-28 09:12:02.383773] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:08.332 [2024-11-28 09:12:02.388887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.332 
[2024-11-28 09:12:02.388939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:08.332 [2024-11-28 09:12:02.388952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.454 ms 00:28:08.332 [2024-11-28 09:12:02.388965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.488936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.489007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:08.593 [2024-11-28 09:12:02.489033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.932 ms 00:28:08.593 [2024-11-28 09:12:02.489049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.489278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.489295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:08.593 [2024-11-28 09:12:02.489304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:28:08.593 [2024-11-28 09:12:02.489315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.495679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.495888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:08.593 [2024-11-28 09:12:02.495909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:28:08.593 [2024-11-28 09:12:02.495921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.501151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.501207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:08.593 [2024-11-28 09:12:02.501219] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.181 ms 00:28:08.593 [2024-11-28 09:12:02.501230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.501650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.501667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:08.593 [2024-11-28 09:12:02.501678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:28:08.593 [2024-11-28 09:12:02.501693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.547590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.547817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:08.593 [2024-11-28 09:12:02.547838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.874 ms 00:28:08.593 [2024-11-28 09:12:02.547851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.558466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.558577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:08.593 [2024-11-28 09:12:02.558608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.082 ms 00:28:08.593 [2024-11-28 09:12:02.558643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.593 [2024-11-28 09:12:02.565521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.593 [2024-11-28 09:12:02.565581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:08.594 [2024-11-28 09:12:02.565603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:28:08.594 [2024-11-28 09:12:02.565614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.594 [2024-11-28 
09:12:02.572232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.594 [2024-11-28 09:12:02.572295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:08.594 [2024-11-28 09:12:02.572306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.569 ms 00:28:08.594 [2024-11-28 09:12:02.572321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.594 [2024-11-28 09:12:02.572376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.594 [2024-11-28 09:12:02.572390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:08.594 [2024-11-28 09:12:02.572400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:08.594 [2024-11-28 09:12:02.572411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.594 [2024-11-28 09:12:02.572499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:08.594 [2024-11-28 09:12:02.572512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:08.594 [2024-11-28 09:12:02.572521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:08.594 [2024-11-28 09:12:02.572551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:08.594 [2024-11-28 09:12:02.573929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4567.971 ms, result 0 00:28:08.594 { 00:28:08.594 "name": "ftl0", 00:28:08.594 "uuid": "10168779-6628-4374-9869-80940d4e4796" 00:28:08.594 } 00:28:08.594 09:12:02 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:08.594 09:12:02 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:08.855 09:12:02 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:08.855 09:12:02 ftl.ftl_restore_fast -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:09.118 [2024-11-28 09:12:03.007475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.007687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:09.118 [2024-11-28 09:12:03.007720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:09.118 [2024-11-28 09:12:03.007730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.007770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:09.118 [2024-11-28 09:12:03.008765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.008844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:09.118 [2024-11-28 09:12:03.008856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:28:09.118 [2024-11-28 09:12:03.008868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.009138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.009152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:09.118 [2024-11-28 09:12:03.009161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:28:09.118 [2024-11-28 09:12:03.009173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.012435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.012465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:09.118 [2024-11-28 09:12:03.012475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:28:09.118 [2024-11-28 09:12:03.012486] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.018809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.018862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:09.118 [2024-11-28 09:12:03.018873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:28:09.118 [2024-11-28 09:12:03.018883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.022201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.022389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:09.118 [2024-11-28 09:12:03.022407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:28:09.118 [2024-11-28 09:12:03.022417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.029053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.029233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:09.118 [2024-11-28 09:12:03.029252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.594 ms 00:28:09.118 [2024-11-28 09:12:03.029262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.029411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.029425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:09.118 [2024-11-28 09:12:03.029435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:09.118 [2024-11-28 09:12:03.029445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.032703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 
09:12:03.032886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:09.118 [2024-11-28 09:12:03.032904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:28:09.118 [2024-11-28 09:12:03.032914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.035660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.035723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:09.118 [2024-11-28 09:12:03.035733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:28:09.118 [2024-11-28 09:12:03.035743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.038013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.038068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:09.118 [2024-11-28 09:12:03.038078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:28:09.118 [2024-11-28 09:12:03.038089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.040485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.118 [2024-11-28 09:12:03.040544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:09.118 [2024-11-28 09:12:03.040555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.322 ms 00:28:09.118 [2024-11-28 09:12:03.040566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.118 [2024-11-28 09:12:03.040613] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:09.118 [2024-11-28 09:12:03.040635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:09.118 [2024-11-28 09:12:03.040648] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:09.118 [Bands 2 through 99 all report identically: 0 / 261120 wr_cnt: 0 state: free] 00:28:09.119 [2024-11-28 09:12:03.041640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0
state: free 00:28:09.119 [2024-11-28 09:12:03.041648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:09.119 [2024-11-28 09:12:03.041669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:09.119 [2024-11-28 09:12:03.041678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10168779-6628-4374-9869-80940d4e4796 00:28:09.119 [2024-11-28 09:12:03.041695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:09.119 [2024-11-28 09:12:03.041703] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:09.119 [2024-11-28 09:12:03.041713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:09.119 [2024-11-28 09:12:03.041722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:09.119 [2024-11-28 09:12:03.041732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:09.119 [2024-11-28 09:12:03.041741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:09.119 [2024-11-28 09:12:03.041751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:09.119 [2024-11-28 09:12:03.041758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:09.119 [2024-11-28 09:12:03.041768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:09.119 [2024-11-28 09:12:03.041776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.119 [2024-11-28 09:12:03.041790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:09.119 [2024-11-28 09:12:03.041817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:28:09.119 [2024-11-28 09:12:03.041828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.119 [2024-11-28 09:12:03.044870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.119 [2024-11-28 
09:12:03.044914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:09.119 [2024-11-28 09:12:03.044926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:28:09.119 [2024-11-28 09:12:03.044937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.119 [2024-11-28 09:12:03.045088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.119 [2024-11-28 09:12:03.045101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:09.119 [2024-11-28 09:12:03.045110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:09.119 [2024-11-28 09:12:03.045120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.119 [2024-11-28 09:12:03.056075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.119 [2024-11-28 09:12:03.056136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:09.120 [2024-11-28 09:12:03.056148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.056160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.056231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.056243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:09.120 [2024-11-28 09:12:03.056252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.056263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.056348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.056366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:09.120 [2024-11-28 09:12:03.056376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.056386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.056404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.056418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:09.120 [2024-11-28 09:12:03.056426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.056436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.075895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.075961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:09.120 [2024-11-28 09:12:03.075974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.075986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.092409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.092479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:09.120 [2024-11-28 09:12:03.092491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.092506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.092604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.092622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:09.120 [2024-11-28 09:12:03.092637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.092649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.092701] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.092714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:09.120 [2024-11-28 09:12:03.092730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.092741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.092879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.092893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:09.120 [2024-11-28 09:12:03.092903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.092914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.092952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.092969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:09.120 [2024-11-28 09:12:03.092979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.092995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.093048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.093064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:09.120 [2024-11-28 09:12:03.093076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.093088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.093152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:09.120 [2024-11-28 09:12:03.093167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:28:09.120 [2024-11-28 09:12:03.093180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:09.120 [2024-11-28 09:12:03.093192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.120 [2024-11-28 09:12:03.093383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.851 ms, result 0 00:28:09.120 true 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93197 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93197 ']' 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93197 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93197 00:28:09.120 killing process with pid 93197 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93197' 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93197 00:28:09.120 09:12:03 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93197 00:28:14.413 09:12:07 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:16.960 262144+0 records in 00:28:16.960 262144+0 records out 00:28:16.960 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.49119 s, 308 MB/s 00:28:16.960 09:12:11 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:19.509 
09:12:13 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:19.509 [2024-11-28 09:12:13.325316] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:19.509 [2024-11-28 09:12:13.325422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93419 ] 00:28:19.509 [2024-11-28 09:12:13.471337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.509 [2024-11-28 09:12:13.544520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.772 [2024-11-28 09:12:13.694898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.772 [2024-11-28 09:12:13.695003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:19.772 [2024-11-28 09:12:13.858671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.858757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.772 [2024-11-28 09:12:13.858779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:19.772 [2024-11-28 09:12:13.858789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.858873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.858892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.772 [2024-11-28 09:12:13.858902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:19.772 [2024-11-28 09:12:13.858911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.858937] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.772 [2024-11-28 09:12:13.859220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.772 [2024-11-28 09:12:13.859241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.859250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.772 [2024-11-28 09:12:13.859262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:28:19.772 [2024-11-28 09:12:13.859277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.861533] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:19.772 [2024-11-28 09:12:13.866415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.866476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:19.772 [2024-11-28 09:12:13.866489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:28:19.772 [2024-11-28 09:12:13.866499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.866582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.866592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:19.772 [2024-11-28 09:12:13.866605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:19.772 [2024-11-28 09:12:13.866617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.877948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.877990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize memory pools 00:28:19.772 [2024-11-28 09:12:13.878002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.280 ms 00:28:19.772 [2024-11-28 09:12:13.878019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.878131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.878142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.772 [2024-11-28 09:12:13.878152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:19.772 [2024-11-28 09:12:13.878165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.878226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.878236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.772 [2024-11-28 09:12:13.878246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:19.772 [2024-11-28 09:12:13.878259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.878286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.772 [2024-11-28 09:12:13.880937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.880977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.772 [2024-11-28 09:12:13.880990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:28:19.772 [2024-11-28 09:12:13.880999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.881040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.772 [2024-11-28 09:12:13.881050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.772 
[2024-11-28 09:12:13.881068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:19.772 [2024-11-28 09:12:13.881081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.772 [2024-11-28 09:12:13.881107] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.772 [2024-11-28 09:12:13.881139] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.772 [2024-11-28 09:12:13.881181] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.773 [2024-11-28 09:12:13.881199] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:19.773 [2024-11-28 09:12:13.881312] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.773 [2024-11-28 09:12:13.881329] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.773 [2024-11-28 09:12:13.881342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:19.773 [2024-11-28 09:12:13.881353] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881366] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881375] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.773 [2024-11-28 09:12:13.881387] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.773 [2024-11-28 09:12:13.881395] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.773 [2024-11-28 09:12:13.881404] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:28:19.773 [2024-11-28 09:12:13.881413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.773 [2024-11-28 09:12:13.881422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.773 [2024-11-28 09:12:13.881430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:28:19.773 [2024-11-28 09:12:13.881442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.773 [2024-11-28 09:12:13.881528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.773 [2024-11-28 09:12:13.881540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.773 [2024-11-28 09:12:13.881548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:19.773 [2024-11-28 09:12:13.881559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.773 [2024-11-28 09:12:13.881684] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.773 [2024-11-28 09:12:13.881702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.773 [2024-11-28 09:12:13.881711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.773 [2024-11-28 09:12:13.881744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.773 [2024-11-28 09:12:13.881766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881774] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.773 [2024-11-28 09:12:13.881781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.773 [2024-11-28 09:12:13.881787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.773 [2024-11-28 09:12:13.881815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.773 [2024-11-28 09:12:13.881823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.773 [2024-11-28 09:12:13.881831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.773 [2024-11-28 09:12:13.881842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.773 [2024-11-28 09:12:13.881857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.773 [2024-11-28 09:12:13.881881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.773 [2024-11-28 09:12:13.881903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.773 [2024-11-28 09:12:13.881925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881932] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.773 [2024-11-28 09:12:13.881951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.773 [2024-11-28 09:12:13.881966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.773 [2024-11-28 09:12:13.881974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.773 [2024-11-28 09:12:13.881981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.773 [2024-11-28 09:12:13.881988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.773 [2024-11-28 09:12:13.881995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.773 [2024-11-28 09:12:13.882002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.773 [2024-11-28 09:12:13.882009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.773 [2024-11-28 09:12:13.882016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.773 [2024-11-28 09:12:13.882023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.882031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.773 [2024-11-28 09:12:13.882038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.773 [2024-11-28 09:12:13.882047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.882053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.773 [2024-11-28 09:12:13.882065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.773 
[2024-11-28 09:12:13.882073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.773 [2024-11-28 09:12:13.882084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.773 [2024-11-28 09:12:13.882095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.773 [2024-11-28 09:12:13.882102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.773 [2024-11-28 09:12:13.882109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.773 [2024-11-28 09:12:13.882116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.773 [2024-11-28 09:12:13.882123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.773 [2024-11-28 09:12:13.882130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.773 [2024-11-28 09:12:13.882139] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.773 [2024-11-28 09:12:13.882149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.773 [2024-11-28 09:12:13.882165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.773 [2024-11-28 09:12:13.882172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:19.773 [2024-11-28 09:12:13.882179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.773 [2024-11-28 09:12:13.882187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:19.773 [2024-11-28 09:12:13.882198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.773 [2024-11-28 09:12:13.882205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.773 [2024-11-28 09:12:13.882212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.773 [2024-11-28 09:12:13.882220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.773 [2024-11-28 09:12:13.882227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.773 [2024-11-28 09:12:13.882264] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.773 [2024-11-28 09:12:13.882272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.773 [2024-11-28 
09:12:13.882283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.773 [2024-11-28 09:12:13.882292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.773 [2024-11-28 09:12:13.882300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.773 [2024-11-28 09:12:13.882307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.773 [2024-11-28 09:12:13.882315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.773 [2024-11-28 09:12:13.882327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.773 [2024-11-28 09:12:13.882335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:28:19.774 [2024-11-28 09:12:13.882346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.919172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.919449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:20.035 [2024-11-28 09:12:13.919541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.757 ms 00:28:20.035 [2024-11-28 09:12:13.919574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.919737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.919768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:20.035 [2024-11-28 09:12:13.919795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:28:20.035 [2024-11-28 09:12:13.919901] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.935777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.935973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.035 [2024-11-28 09:12:13.936033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.752 ms 00:28:20.035 [2024-11-28 09:12:13.936057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.936120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.936145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:20.035 [2024-11-28 09:12:13.936166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:20.035 [2024-11-28 09:12:13.936196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.936989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.937139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:20.035 [2024-11-28 09:12:13.937196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:28:20.035 [2024-11-28 09:12:13.937225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.937418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.937444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:20.035 [2024-11-28 09:12:13.937511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:28:20.035 [2024-11-28 09:12:13.937534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.947073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 
09:12:13.947231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:20.035 [2024-11-28 09:12:13.947295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.498 ms 00:28:20.035 [2024-11-28 09:12:13.947319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.952233] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:20.035 [2024-11-28 09:12:13.952417] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:20.035 [2024-11-28 09:12:13.952483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.952506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:20.035 [2024-11-28 09:12:13.952527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:28:20.035 [2024-11-28 09:12:13.952546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.969007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.969193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:20.035 [2024-11-28 09:12:13.969258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.367 ms 00:28:20.035 [2024-11-28 09:12:13.969281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.972560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.972717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:20.035 [2024-11-28 09:12:13.972772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:28:20.035 [2024-11-28 09:12:13.972793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:20.035 [2024-11-28 09:12:13.975618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.975769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:20.035 [2024-11-28 09:12:13.975853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:28:20.035 [2024-11-28 09:12:13.975901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:13.976367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:13.976483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:20.035 [2024-11-28 09:12:13.976553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:28:20.035 [2024-11-28 09:12:13.976577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.006230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.006486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:20.035 [2024-11-28 09:12:14.006554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.590 ms 00:28:20.035 [2024-11-28 09:12:14.006578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.015882] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:20.035 [2024-11-28 09:12:14.019435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.019573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:20.035 [2024-11-28 09:12:14.019626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.790 ms 00:28:20.035 [2024-11-28 09:12:14.019667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.019772] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.019818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:20.035 [2024-11-28 09:12:14.019841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:20.035 [2024-11-28 09:12:14.019861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.020051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.020083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:20.035 [2024-11-28 09:12:14.020106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:20.035 [2024-11-28 09:12:14.020162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.020216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.020241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:20.035 [2024-11-28 09:12:14.020263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:20.035 [2024-11-28 09:12:14.020314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.035 [2024-11-28 09:12:14.020385] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:20.035 [2024-11-28 09:12:14.020409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.035 [2024-11-28 09:12:14.020431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:20.035 [2024-11-28 09:12:14.020451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:20.036 [2024-11-28 09:12:14.020470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.036 [2024-11-28 09:12:14.027706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:20.036 [2024-11-28 09:12:14.027903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:20.036 [2024-11-28 09:12:14.027963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms 00:28:20.036 [2024-11-28 09:12:14.027986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.036 [2024-11-28 09:12:14.028157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.036 [2024-11-28 09:12:14.028186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:20.036 [2024-11-28 09:12:14.028238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:20.036 [2024-11-28 09:12:14.028267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.036 [2024-11-28 09:12:14.029711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 170.446 ms, result 0 00:28:20.980  [2024-11-28T09:12:16.046Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-28T09:12:17.479Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-28T09:12:18.047Z] Copying: 43/1024 [MB] (23 MBps) [2024-11-28T09:12:19.433Z] Copying: 72/1024 [MB] (28 MBps) [2024-11-28T09:12:20.378Z] Copying: 96/1024 [MB] (24 MBps) [2024-11-28T09:12:21.322Z] Copying: 108/1024 [MB] (11 MBps) [2024-11-28T09:12:22.264Z] Copying: 124/1024 [MB] (16 MBps) [2024-11-28T09:12:23.204Z] Copying: 140/1024 [MB] (15 MBps) [2024-11-28T09:12:24.147Z] Copying: 153/1024 [MB] (13 MBps) [2024-11-28T09:12:25.144Z] Copying: 171/1024 [MB] (17 MBps) [2024-11-28T09:12:26.080Z] Copying: 201/1024 [MB] (29 MBps) [2024-11-28T09:12:27.456Z] Copying: 227/1024 [MB] (26 MBps) [2024-11-28T09:12:28.396Z] Copying: 250/1024 [MB] (22 MBps) [2024-11-28T09:12:29.341Z] Copying: 281/1024 [MB] (30 MBps) [2024-11-28T09:12:30.285Z] Copying: 295/1024 [MB] (14 MBps) [2024-11-28T09:12:31.226Z] Copying: 308/1024 [MB] (13 MBps) [2024-11-28T09:12:32.168Z] Copying: 322/1024 [MB] 
(13 MBps) [2024-11-28T09:12:33.111Z] Copying: 337/1024 [MB] (15 MBps) [2024-11-28T09:12:34.053Z] Copying: 352/1024 [MB] (14 MBps) [2024-11-28T09:12:35.437Z] Copying: 368/1024 [MB] (15 MBps) [2024-11-28T09:12:36.378Z] Copying: 381/1024 [MB] (13 MBps) [2024-11-28T09:12:37.313Z] Copying: 394/1024 [MB] (12 MBps) [2024-11-28T09:12:38.276Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-28T09:12:39.219Z] Copying: 432/1024 [MB] (20 MBps) [2024-11-28T09:12:40.164Z] Copying: 450/1024 [MB] (17 MBps) [2024-11-28T09:12:41.108Z] Copying: 464/1024 [MB] (14 MBps) [2024-11-28T09:12:42.054Z] Copying: 480/1024 [MB] (15 MBps) [2024-11-28T09:12:43.445Z] Copying: 493/1024 [MB] (12 MBps) [2024-11-28T09:12:44.386Z] Copying: 503/1024 [MB] (10 MBps) [2024-11-28T09:12:45.341Z] Copying: 525/1024 [MB] (22 MBps) [2024-11-28T09:12:46.288Z] Copying: 539/1024 [MB] (13 MBps) [2024-11-28T09:12:47.232Z] Copying: 552/1024 [MB] (13 MBps) [2024-11-28T09:12:48.198Z] Copying: 568/1024 [MB] (15 MBps) [2024-11-28T09:12:49.174Z] Copying: 583/1024 [MB] (14 MBps) [2024-11-28T09:12:50.119Z] Copying: 601/1024 [MB] (18 MBps) [2024-11-28T09:12:51.064Z] Copying: 621/1024 [MB] (19 MBps) [2024-11-28T09:12:52.453Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-28T09:12:53.405Z] Copying: 645/1024 [MB] (12 MBps) [2024-11-28T09:12:54.350Z] Copying: 657/1024 [MB] (12 MBps) [2024-11-28T09:12:55.292Z] Copying: 668/1024 [MB] (10 MBps) [2024-11-28T09:12:56.230Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-28T09:12:57.173Z] Copying: 700/1024 [MB] (22 MBps) [2024-11-28T09:12:58.112Z] Copying: 723/1024 [MB] (23 MBps) [2024-11-28T09:12:59.046Z] Copying: 739/1024 [MB] (15 MBps) [2024-11-28T09:13:00.421Z] Copying: 766/1024 [MB] (27 MBps) [2024-11-28T09:13:01.361Z] Copying: 790/1024 [MB] (23 MBps) [2024-11-28T09:13:02.302Z] Copying: 817/1024 [MB] (27 MBps) [2024-11-28T09:13:03.248Z] Copying: 838/1024 [MB] (21 MBps) [2024-11-28T09:13:04.188Z] Copying: 858/1024 [MB] (19 MBps) [2024-11-28T09:13:05.124Z] Copying: 881/1024 [MB] (23 MBps) 
[2024-11-28T09:13:06.058Z] Copying: 903/1024 [MB] (22 MBps) [2024-11-28T09:13:07.430Z] Copying: 930/1024 [MB] (26 MBps) [2024-11-28T09:13:08.372Z] Copying: 971/1024 [MB] (40 MBps) [2024-11-28T09:13:09.316Z] Copying: 1008/1024 [MB] (37 MBps) [2024-11-28T09:13:09.316Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-28T09:13:09.316Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-28 09:13:09.049123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.196 [2024-11-28 09:13:09.049197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:15.196 [2024-11-28 09:13:09.049216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:15.196 [2024-11-28 09:13:09.049226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.196 [2024-11-28 09:13:09.049251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:15.196 [2024-11-28 09:13:09.050336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.196 [2024-11-28 09:13:09.050373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:15.196 [2024-11-28 09:13:09.050387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:29:15.196 [2024-11-28 09:13:09.050396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.196 [2024-11-28 09:13:09.053372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.196 [2024-11-28 09:13:09.053566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:15.196 [2024-11-28 09:13:09.053589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:29:15.196 [2024-11-28 09:13:09.053598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.196 [2024-11-28 09:13:09.053674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.196 [2024-11-28 
09:13:09.053689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:15.196 [2024-11-28 09:13:09.053699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:15.196 [2024-11-28 09:13:09.053708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.196 [2024-11-28 09:13:09.053778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.196 [2024-11-28 09:13:09.053787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:15.196 [2024-11-28 09:13:09.053812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:15.196 [2024-11-28 09:13:09.053822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.196 [2024-11-28 09:13:09.053836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:15.196 [2024-11-28 09:13:09.053850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.053995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.054003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.054011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.054018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.054027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:15.196 [2024-11-28 09:13:09.054034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054247] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054354] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054459] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 
09:13:09.054568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:15.197 [2024-11-28 09:13:09.054646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:15.197 [2024-11-28 09:13:09.054657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10168779-6628-4374-9869-80940d4e4796 00:29:15.197 [2024-11-28 09:13:09.054666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:15.197 [2024-11-28 09:13:09.054673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:15.197 [2024-11-28 09:13:09.054686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:15.197 [2024-11-28 09:13:09.054695] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:15.197 
[2024-11-28 09:13:09.054702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:15.197 [2024-11-28 09:13:09.054710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:15.197 [2024-11-28 09:13:09.054718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:15.197 [2024-11-28 09:13:09.054724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:15.197 [2024-11-28 09:13:09.054731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:15.197 [2024-11-28 09:13:09.054738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.197 [2024-11-28 09:13:09.054746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:15.197 [2024-11-28 09:13:09.054758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:29:15.197 [2024-11-28 09:13:09.054769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.057940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.198 [2024-11-28 09:13:09.057979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:15.198 [2024-11-28 09:13:09.057990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:29:15.198 [2024-11-28 09:13:09.057999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.058164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.198 [2024-11-28 09:13:09.058174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:15.198 [2024-11-28 09:13:09.058190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:29:15.198 [2024-11-28 09:13:09.058202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.067708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:15.198 [2024-11-28 09:13:09.067944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.198 [2024-11-28 09:13:09.067968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.067986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.068067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.068078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.198 [2024-11-28 09:13:09.068087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.068099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.068169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.068185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.198 [2024-11-28 09:13:09.068194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.068202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.068219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.068228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.198 [2024-11-28 09:13:09.068237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.068246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.087416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.087617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:15.198 [2024-11-28 09:13:09.087638] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.087648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:15.198 [2024-11-28 09:13:09.102368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.198 [2024-11-28 09:13:09.102467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.198 [2024-11-28 09:13:09.102533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.198 [2024-11-28 09:13:09.102630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 
[2024-11-28 09:13:09.102664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:15.198 [2024-11-28 09:13:09.102682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.198 [2024-11-28 09:13:09.102770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.102855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.198 [2024-11-28 09:13:09.102866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.198 [2024-11-28 09:13:09.102876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.198 [2024-11-28 09:13:09.102884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.198 [2024-11-28 09:13:09.103044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.876 ms, result 0 00:29:15.459 00:29:15.459 00:29:15.459 09:13:09 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:15.459 [2024-11-28 09:13:09.560516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:15.459 [2024-11-28 09:13:09.560688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93987 ] 00:29:15.721 [2024-11-28 09:13:09.718155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.721 [2024-11-28 09:13:09.792033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.983 [2024-11-28 09:13:09.943145] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:15.983 [2024-11-28 09:13:09.943505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.246 [2024-11-28 09:13:10.107882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.108129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:16.246 [2024-11-28 09:13:10.108264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:16.246 [2024-11-28 09:13:10.108307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.108417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.108515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.246 [2024-11-28 09:13:10.108538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:16.246 [2024-11-28 09:13:10.108547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.108582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:16.246 [2024-11-28 09:13:10.108963] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:16.246 [2024-11-28 
09:13:10.108985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.108998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.246 [2024-11-28 09:13:10.109016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:29:16.246 [2024-11-28 09:13:10.109028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.109373] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:16.246 [2024-11-28 09:13:10.109405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.109416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:16.246 [2024-11-28 09:13:10.109428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:16.246 [2024-11-28 09:13:10.109437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.109507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.109522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:16.246 [2024-11-28 09:13:10.109536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:16.246 [2024-11-28 09:13:10.109546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.110032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.110077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.246 [2024-11-28 09:13:10.110099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:29:16.246 [2024-11-28 09:13:10.110120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.110234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.110332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.246 [2024-11-28 09:13:10.110359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:16.246 [2024-11-28 09:13:10.110385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.110430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.110454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:16.246 [2024-11-28 09:13:10.110540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:16.246 [2024-11-28 09:13:10.110574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.110621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:16.246 [2024-11-28 09:13:10.113491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.113673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.246 [2024-11-28 09:13:10.113743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:29:16.246 [2024-11-28 09:13:10.113774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.113854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.113878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:16.246 [2024-11-28 09:13:10.113900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:16.246 [2024-11-28 09:13:10.113919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.113988] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:16.246 
[2024-11-28 09:13:10.114140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:16.246 [2024-11-28 09:13:10.114209] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:16.246 [2024-11-28 09:13:10.114249] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:16.246 [2024-11-28 09:13:10.114382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:16.246 [2024-11-28 09:13:10.114417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:16.246 [2024-11-28 09:13:10.114449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:16.246 [2024-11-28 09:13:10.114482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:16.246 [2024-11-28 09:13:10.114697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:16.246 [2024-11-28 09:13:10.114745] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:16.246 [2024-11-28 09:13:10.114866] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:16.246 [2024-11-28 09:13:10.114891] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:16.246 [2024-11-28 09:13:10.114948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:16.246 [2024-11-28 09:13:10.114973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.114995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:16.246 [2024-11-28 09:13:10.115036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.988 ms 00:29:16.246 [2024-11-28 09:13:10.115048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.115151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.246 [2024-11-28 09:13:10.115161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:16.246 [2024-11-28 09:13:10.115170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:16.246 [2024-11-28 09:13:10.115183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.246 [2024-11-28 09:13:10.115291] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:16.246 [2024-11-28 09:13:10.115304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:16.246 [2024-11-28 09:13:10.115319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.246 [2024-11-28 09:13:10.115331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:16.246 [2024-11-28 09:13:10.115353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:16.246 [2024-11-28 09:13:10.115368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:16.246 [2024-11-28 09:13:10.115376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.246 [2024-11-28 09:13:10.115389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:16.246 [2024-11-28 09:13:10.115396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:16.246 [2024-11-28 09:13:10.115403] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.246 [2024-11-28 09:13:10.115409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:16.246 [2024-11-28 09:13:10.115416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:16.246 [2024-11-28 09:13:10.115423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:16.246 [2024-11-28 09:13:10.115436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:16.246 [2024-11-28 09:13:10.115443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:16.246 [2024-11-28 09:13:10.115463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.246 [2024-11-28 09:13:10.115476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:16.246 [2024-11-28 09:13:10.115483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.246 [2024-11-28 09:13:10.115502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:16.246 [2024-11-28 09:13:10.115509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:16.246 [2024-11-28 09:13:10.115515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.247 [2024-11-28 09:13:10.115522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:16.247 [2024-11-28 09:13:10.115529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:16.247 [2024-11-28 09:13:10.115536] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.247 [2024-11-28 09:13:10.115545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:16.247 [2024-11-28 09:13:10.115552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:16.247 [2024-11-28 09:13:10.115559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.247 [2024-11-28 09:13:10.115566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:16.247 [2024-11-28 09:13:10.115578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:16.247 [2024-11-28 09:13:10.115584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.247 [2024-11-28 09:13:10.115591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:16.247 [2024-11-28 09:13:10.115598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:16.247 [2024-11-28 09:13:10.115605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.247 [2024-11-28 09:13:10.115611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:16.247 [2024-11-28 09:13:10.115617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:16.247 [2024-11-28 09:13:10.115624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.247 [2024-11-28 09:13:10.115630] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:16.247 [2024-11-28 09:13:10.115639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:16.247 [2024-11-28 09:13:10.115647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.247 [2024-11-28 09:13:10.115658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.247 [2024-11-28 09:13:10.115668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:16.247 
[2024-11-28 09:13:10.115676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:16.247 [2024-11-28 09:13:10.115684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:16.247 [2024-11-28 09:13:10.115692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:16.247 [2024-11-28 09:13:10.115702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:16.247 [2024-11-28 09:13:10.115711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:16.247 [2024-11-28 09:13:10.115721] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:16.247 [2024-11-28 09:13:10.115736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:16.247 [2024-11-28 09:13:10.115757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:16.247 [2024-11-28 09:13:10.115766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:16.247 [2024-11-28 09:13:10.115774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:16.247 [2024-11-28 09:13:10.115783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:16.247 [2024-11-28 09:13:10.115792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:16.247 [2024-11-28 09:13:10.115825] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:16.247 [2024-11-28 09:13:10.115834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:16.247 [2024-11-28 09:13:10.115844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:16.247 [2024-11-28 09:13:10.115853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:16.247 [2024-11-28 09:13:10.115902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:16.247 [2024-11-28 09:13:10.115911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.247 [2024-11-28 09:13:10.115933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:29:16.247 [2024-11-28 09:13:10.115942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:16.247 [2024-11-28 09:13:10.115951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:16.247 [2024-11-28 09:13:10.115960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.115973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:16.247 [2024-11-28 09:13:10.115983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:29:16.247 [2024-11-28 09:13:10.115992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.140332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.140548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.247 [2024-11-28 09:13:10.140628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:29:16.247 [2024-11-28 09:13:10.140653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.140768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.140792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:16.247 [2024-11-28 09:13:10.140832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:16.247 [2024-11-28 09:13:10.140853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.157515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.157718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.247 [2024-11-28 
09:13:10.157786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.567 ms 00:29:16.247 [2024-11-28 09:13:10.157844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.157907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.157930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.247 [2024-11-28 09:13:10.157951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:16.247 [2024-11-28 09:13:10.157971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.158105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.158314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.247 [2024-11-28 09:13:10.158342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:16.247 [2024-11-28 09:13:10.158368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.158532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.158561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.247 [2024-11-28 09:13:10.158638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:29:16.247 [2024-11-28 09:13:10.158667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.168477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.168646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.247 [2024-11-28 09:13:10.168715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.770 ms 00:29:16.247 [2024-11-28 09:13:10.169156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:16.247 [2024-11-28 09:13:10.169400] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:16.247 [2024-11-28 09:13:10.169551] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:16.247 [2024-11-28 09:13:10.169591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.169676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:16.247 [2024-11-28 09:13:10.169702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:29:16.247 [2024-11-28 09:13:10.169729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.182242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.182414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:16.247 [2024-11-28 09:13:10.182478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.476 ms 00:29:16.247 [2024-11-28 09:13:10.182509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.182665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.182690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:16.247 [2024-11-28 09:13:10.182832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:16.247 [2024-11-28 09:13:10.182853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.182965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.182994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:16.247 [2024-11-28 09:13:10.183027] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:16.247 [2024-11-28 09:13:10.183054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.247 [2024-11-28 09:13:10.183397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.247 [2024-11-28 09:13:10.183431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:16.247 [2024-11-28 09:13:10.183451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:29:16.247 [2024-11-28 09:13:10.183474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.183577] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:16.248 [2024-11-28 09:13:10.183613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.183680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:16.248 [2024-11-28 09:13:10.183705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:16.248 [2024-11-28 09:13:10.183730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.194730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:16.248 [2024-11-28 09:13:10.195051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.195090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:16.248 [2024-11-28 09:13:10.195160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.259 ms 00:29:16.248 [2024-11-28 09:13:10.195214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.197878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.198025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore L2P 00:29:16.248 [2024-11-28 09:13:10.198043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:29:16.248 [2024-11-28 09:13:10.198059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.198182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.198194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:16.248 [2024-11-28 09:13:10.198204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:16.248 [2024-11-28 09:13:10.198212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.198243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.198263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:16.248 [2024-11-28 09:13:10.198271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:16.248 [2024-11-28 09:13:10.198280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.198323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:16.248 [2024-11-28 09:13:10.198337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.198345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:16.248 [2024-11-28 09:13:10.198354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:16.248 [2024-11-28 09:13:10.198362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.206400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.206587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:16.248 [2024-11-28 09:13:10.206644] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.017 ms 00:29:16.248 [2024-11-28 09:13:10.206668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.206764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.248 [2024-11-28 09:13:10.206789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:16.248 [2024-11-28 09:13:10.206836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:16.248 [2024-11-28 09:13:10.206860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.248 [2024-11-28 09:13:10.208264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.833 ms, result 0 00:29:17.635  [2024-11-28T09:13:12.700Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-28T09:13:13.645Z] Copying: 27/1024 [MB] (15 MBps) [2024-11-28T09:13:14.588Z] Copying: 48/1024 [MB] (20 MBps) [2024-11-28T09:13:15.534Z] Copying: 65/1024 [MB] (16 MBps) [2024-11-28T09:13:16.478Z] Copying: 80/1024 [MB] (15 MBps) [2024-11-28T09:13:17.422Z] Copying: 91/1024 [MB] (10 MBps) [2024-11-28T09:13:18.802Z] Copying: 101/1024 [MB] (10 MBps) [2024-11-28T09:13:19.781Z] Copying: 118/1024 [MB] (16 MBps) [2024-11-28T09:13:20.420Z] Copying: 132/1024 [MB] (14 MBps) [2024-11-28T09:13:21.810Z] Copying: 148/1024 [MB] (15 MBps) [2024-11-28T09:13:22.757Z] Copying: 165/1024 [MB] (16 MBps) [2024-11-28T09:13:23.699Z] Copying: 176/1024 [MB] (10 MBps) [2024-11-28T09:13:24.640Z] Copying: 196/1024 [MB] (19 MBps) [2024-11-28T09:13:25.597Z] Copying: 213/1024 [MB] (17 MBps) [2024-11-28T09:13:26.540Z] Copying: 226/1024 [MB] (12 MBps) [2024-11-28T09:13:27.482Z] Copying: 241/1024 [MB] (15 MBps) [2024-11-28T09:13:28.427Z] Copying: 252/1024 [MB] (10 MBps) [2024-11-28T09:13:29.816Z] Copying: 263/1024 [MB] (10 MBps) [2024-11-28T09:13:30.760Z] Copying: 277/1024 [MB] (13 MBps) [2024-11-28T09:13:31.704Z] Copying: 
293/1024 [MB] (16 MBps) [2024-11-28T09:13:32.648Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-28T09:13:33.593Z] Copying: 327/1024 [MB] (22 MBps) [2024-11-28T09:13:34.534Z] Copying: 352/1024 [MB] (24 MBps) [2024-11-28T09:13:35.475Z] Copying: 376/1024 [MB] (24 MBps) [2024-11-28T09:13:36.415Z] Copying: 398/1024 [MB] (21 MBps) [2024-11-28T09:13:37.801Z] Copying: 413/1024 [MB] (15 MBps) [2024-11-28T09:13:38.744Z] Copying: 434/1024 [MB] (20 MBps) [2024-11-28T09:13:39.686Z] Copying: 456/1024 [MB] (22 MBps) [2024-11-28T09:13:40.628Z] Copying: 477/1024 [MB] (21 MBps) [2024-11-28T09:13:41.570Z] Copying: 495/1024 [MB] (17 MBps) [2024-11-28T09:13:42.514Z] Copying: 516/1024 [MB] (20 MBps) [2024-11-28T09:13:43.458Z] Copying: 526/1024 [MB] (10 MBps) [2024-11-28T09:13:44.403Z] Copying: 538/1024 [MB] (11 MBps) [2024-11-28T09:13:45.790Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-28T09:13:46.737Z] Copying: 564/1024 [MB] (12 MBps) [2024-11-28T09:13:47.684Z] Copying: 577/1024 [MB] (12 MBps) [2024-11-28T09:13:48.630Z] Copying: 587/1024 [MB] (10 MBps) [2024-11-28T09:13:49.577Z] Copying: 602/1024 [MB] (14 MBps) [2024-11-28T09:13:50.521Z] Copying: 617/1024 [MB] (15 MBps) [2024-11-28T09:13:51.468Z] Copying: 636/1024 [MB] (18 MBps) [2024-11-28T09:13:52.471Z] Copying: 648/1024 [MB] (12 MBps) [2024-11-28T09:13:53.415Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-28T09:13:54.804Z] Copying: 670/1024 [MB] (10 MBps) [2024-11-28T09:13:55.747Z] Copying: 680/1024 [MB] (10 MBps) [2024-11-28T09:13:56.689Z] Copying: 691/1024 [MB] (10 MBps) [2024-11-28T09:13:57.632Z] Copying: 701/1024 [MB] (10 MBps) [2024-11-28T09:13:58.577Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-28T09:13:59.523Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-28T09:14:00.458Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-28T09:14:01.837Z] Copying: 758/1024 [MB] (25 MBps) [2024-11-28T09:14:02.411Z] Copying: 781/1024 [MB] (23 MBps) [2024-11-28T09:14:03.801Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-28T09:14:04.746Z] Copying: 802/1024 [MB] (10 
MBps) [2024-11-28T09:14:05.688Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-28T09:14:06.631Z] Copying: 824/1024 [MB] (11 MBps) [2024-11-28T09:14:07.574Z] Copying: 837/1024 [MB] (12 MBps) [2024-11-28T09:14:08.518Z] Copying: 848/1024 [MB] (10 MBps) [2024-11-28T09:14:09.462Z] Copying: 858/1024 [MB] (10 MBps) [2024-11-28T09:14:10.408Z] Copying: 873/1024 [MB] (14 MBps) [2024-11-28T09:14:11.795Z] Copying: 891/1024 [MB] (17 MBps) [2024-11-28T09:14:12.739Z] Copying: 905/1024 [MB] (13 MBps) [2024-11-28T09:14:13.686Z] Copying: 921/1024 [MB] (16 MBps) [2024-11-28T09:14:14.632Z] Copying: 932/1024 [MB] (10 MBps) [2024-11-28T09:14:15.582Z] Copying: 952/1024 [MB] (20 MBps) [2024-11-28T09:14:16.524Z] Copying: 968/1024 [MB] (16 MBps) [2024-11-28T09:14:17.468Z] Copying: 982/1024 [MB] (14 MBps) [2024-11-28T09:14:18.414Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-28T09:14:19.824Z] Copying: 1008/1024 [MB] (11 MBps) [2024-11-28T09:14:20.087Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-28T09:14:20.087Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-28 09:14:19.900263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.967 [2024-11-28 09:14:19.900351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:25.967 [2024-11-28 09:14:19.900375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:25.967 [2024-11-28 09:14:19.900385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.967 [2024-11-28 09:14:19.900410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:25.967 [2024-11-28 09:14:19.901442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.967 [2024-11-28 09:14:19.901535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:25.967 [2024-11-28 09:14:19.901598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:30:25.967 [2024-11-28 
09:14:19.901607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.967 [2024-11-28 09:14:19.902027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.967 [2024-11-28 09:14:19.902041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:25.967 [2024-11-28 09:14:19.902051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:30:25.967 [2024-11-28 09:14:19.902059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.967 [2024-11-28 09:14:19.902091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.967 [2024-11-28 09:14:19.902102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:25.967 [2024-11-28 09:14:19.902116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:25.967 [2024-11-28 09:14:19.902124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.967 [2024-11-28 09:14:19.902191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.967 [2024-11-28 09:14:19.902202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:25.967 [2024-11-28 09:14:19.902216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:25.967 [2024-11-28 09:14:19.902224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.967 [2024-11-28 09:14:19.902239] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:25.967 [2024-11-28 09:14:19.902255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 
[2024-11-28 09:14:19.902293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.967 [2024-11-28 09:14:19.902392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 
[2024-11-28 09:14:19.902446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:30:25.968 [2024-11-28 09:14:19.902604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: 
free 00:30:25.968 [2024-11-28 09:14:19.902765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:30:25.968 [2024-11-28 09:14:19.902942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.902998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 
0 state: free 00:30:25.968 [2024-11-28 09:14:19.903095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:25.968 [2024-11-28 09:14:19.903403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:25.968 
[2024-11-28 09:14:19.903414] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10168779-6628-4374-9869-80940d4e4796 00:30:25.968 [2024-11-28 09:14:19.903430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:25.968 [2024-11-28 09:14:19.903441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:25.969 [2024-11-28 09:14:19.903451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:25.969 [2024-11-28 09:14:19.903463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:25.969 [2024-11-28 09:14:19.903475] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:25.969 [2024-11-28 09:14:19.903493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:25.969 [2024-11-28 09:14:19.903509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:25.969 [2024-11-28 09:14:19.903518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:25.969 [2024-11-28 09:14:19.903527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:25.969 [2024-11-28 09:14:19.903536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.969 [2024-11-28 09:14:19.903548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:25.969 [2024-11-28 09:14:19.903559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:30:25.969 [2024-11-28 09:14:19.903575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.908486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.969 [2024-11-28 09:14:19.908687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:25.969 [2024-11-28 09:14:19.909168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.887 ms 00:30:25.969 [2024-11-28 09:14:19.909241] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.909587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.969 [2024-11-28 09:14:19.909724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:25.969 [2024-11-28 09:14:19.909817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:30:25.969 [2024-11-28 09:14:19.909909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.919518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.919689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.969 [2024-11-28 09:14:19.919748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.919771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.919901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.919929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.969 [2024-11-28 09:14:19.919950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.919970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.920036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.920059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.969 [2024-11-28 09:14:19.920081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.920148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.920184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 
09:14:19.920249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.969 [2024-11-28 09:14:19.920283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.920332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.940429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.940491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.969 [2024-11-28 09:14:19.940505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.940515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:25.969 [2024-11-28 09:14:19.957078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:25.969 [2024-11-28 09:14:19.957181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:25.969 [2024-11-28 09:14:19.957252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:25.969 [2024-11-28 09:14:19.957355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:25.969 [2024-11-28 09:14:19.957423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:25.969 [2024-11-28 09:14:19.957510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.969 [2024-11-28 09:14:19.957611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:25.969 [2024-11-28 09:14:19.957621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.969 [2024-11-28 09:14:19.957631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.969 [2024-11-28 09:14:19.957829] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.491 ms, result 0 00:30:26.230 00:30:26.230 00:30:26.230 09:14:20 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:28.780 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:28.781 09:14:22 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:28.781 [2024-11-28 09:14:22.625362] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:28.781 [2024-11-28 09:14:22.625704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94720 ] 00:30:28.781 [2024-11-28 09:14:22.777385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.781 [2024-11-28 09:14:22.849089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.041 [2024-11-28 09:14:23.001010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.042 [2024-11-28 09:14:23.001105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.305 [2024-11-28 09:14:23.165282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.305 [2024-11-28 09:14:23.165346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:29.305 [2024-11-28 09:14:23.165373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:29.305 [2024-11-28 09:14:23.165382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.305 [2024-11-28 
09:14:23.165451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.305 [2024-11-28 09:14:23.165464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:29.305 [2024-11-28 09:14:23.165478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:29.305 [2024-11-28 09:14:23.165493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.165516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:29.306 [2024-11-28 09:14:23.165877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:29.306 [2024-11-28 09:14:23.165900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.165910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:29.306 [2024-11-28 09:14:23.165924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:30:29.306 [2024-11-28 09:14:23.165937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.166552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:29.306 [2024-11-28 09:14:23.166623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.166636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:29.306 [2024-11-28 09:14:23.166647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:30:29.306 [2024-11-28 09:14:23.166656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.166738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.166754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:29.306 [2024-11-28 
09:14:23.166762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:30:29.306 [2024-11-28 09:14:23.166770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.167085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.167114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:29.306 [2024-11-28 09:14:23.167124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:30:29.306 [2024-11-28 09:14:23.167137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.167234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.167249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:29.306 [2024-11-28 09:14:23.167258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:29.306 [2024-11-28 09:14:23.167266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.167297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.167307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:29.306 [2024-11-28 09:14:23.167315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:29.306 [2024-11-28 09:14:23.167323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.167346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:29.306 [2024-11-28 09:14:23.170211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.170412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:29.306 [2024-11-28 09:14:23.170438] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:30:29.306 [2024-11-28 09:14:23.170446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.170490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.170505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:29.306 [2024-11-28 09:14:23.170517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:29.306 [2024-11-28 09:14:23.170524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.170590] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:29.306 [2024-11-28 09:14:23.170617] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:29.306 [2024-11-28 09:14:23.170664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:29.306 [2024-11-28 09:14:23.170685] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:29.306 [2024-11-28 09:14:23.170820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:29.306 [2024-11-28 09:14:23.170834] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:29.306 [2024-11-28 09:14:23.170846] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:29.306 [2024-11-28 09:14:23.170857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:29.306 [2024-11-28 09:14:23.170867] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:29.306 [2024-11-28 09:14:23.170880] ftl_layout.c: 
689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:29.306 [2024-11-28 09:14:23.170891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:29.306 [2024-11-28 09:14:23.170900] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:29.306 [2024-11-28 09:14:23.170908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:29.306 [2024-11-28 09:14:23.170917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.170926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:29.306 [2024-11-28 09:14:23.170936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:30:29.306 [2024-11-28 09:14:23.170948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.171039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.306 [2024-11-28 09:14:23.171050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:29.306 [2024-11-28 09:14:23.171065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:29.306 [2024-11-28 09:14:23.171076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.306 [2024-11-28 09:14:23.171179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:29.306 [2024-11-28 09:14:23.171192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:29.306 [2024-11-28 09:14:23.171201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:29.306 [2024-11-28 09:14:23.171242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 
00:30:29.306 [2024-11-28 09:14:23.171250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:29.306 [2024-11-28 09:14:23.171267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.306 [2024-11-28 09:14:23.171284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:29.306 [2024-11-28 09:14:23.171292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:29.306 [2024-11-28 09:14:23.171300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.306 [2024-11-28 09:14:23.171312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:29.306 [2024-11-28 09:14:23.171320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:29.306 [2024-11-28 09:14:23.171328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:29.306 [2024-11-28 09:14:23.171342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:29.306 [2024-11-28 09:14:23.171366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:29.306 [2024-11-28 09:14:23.171387] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 89.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:29.306 [2024-11-28 09:14:23.171406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:29.306 [2024-11-28 09:14:23.171428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.306 [2024-11-28 09:14:23.171442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:29.306 [2024-11-28 09:14:23.171449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.306 [2024-11-28 09:14:23.171463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:29.306 [2024-11-28 09:14:23.171476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:29.306 [2024-11-28 09:14:23.171483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.306 [2024-11-28 09:14:23.171489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:29.306 [2024-11-28 09:14:23.171496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:29.306 [2024-11-28 09:14:23.171503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:29.306 [2024-11-28 09:14:23.171518] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:29.306 [2024-11-28 09:14:23.171524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.306 [2024-11-28 09:14:23.171531] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:29.307 [2024-11-28 09:14:23.171538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:29.307 [2024-11-28 09:14:23.171548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.307 [2024-11-28 09:14:23.171556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.307 [2024-11-28 09:14:23.171564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:29.307 [2024-11-28 09:14:23.171572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:29.307 [2024-11-28 09:14:23.171579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:29.307 [2024-11-28 09:14:23.171586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:29.307 [2024-11-28 09:14:23.171596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:29.307 [2024-11-28 09:14:23.171603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:29.307 [2024-11-28 09:14:23.171612] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:29.307 [2024-11-28 09:14:23.171625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:29.307 [2024-11-28 09:14:23.171641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:29.307 [2024-11-28 
09:14:23.171649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:29.307 [2024-11-28 09:14:23.171657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:29.307 [2024-11-28 09:14:23.171665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:29.307 [2024-11-28 09:14:23.171672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:29.307 [2024-11-28 09:14:23.171679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:29.307 [2024-11-28 09:14:23.171687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:29.307 [2024-11-28 09:14:23.171694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:29.307 [2024-11-28 09:14:23.171702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:29.307 [2024-11-28 09:14:23.171742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:29.307 [2024-11-28 09:14:23.171754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:29.307 [2024-11-28 09:14:23.171771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:29.307 [2024-11-28 09:14:23.171778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:29.307 [2024-11-28 09:14:23.171786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:29.307 [2024-11-28 09:14:23.171793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.171819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:29.307 [2024-11-28 09:14:23.171829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:30:29.307 [2024-11-28 09:14:23.171838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.194417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.194487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:29.307 [2024-11-28 09:14:23.194512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.530 ms 00:30:29.307 [2024-11-28 09:14:23.194530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:29.307 [2024-11-28 09:14:23.194660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.194678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:29.307 [2024-11-28 09:14:23.194690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:29.307 [2024-11-28 09:14:23.194701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.211141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.211198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:29.307 [2024-11-28 09:14:23.211215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.355 ms 00:30:29.307 [2024-11-28 09:14:23.211225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.211268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.211278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:29.307 [2024-11-28 09:14:23.211294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:29.307 [2024-11-28 09:14:23.211308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.211418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.211431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:29.307 [2024-11-28 09:14:23.211441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:29.307 [2024-11-28 09:14:23.211459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.211601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.211611] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:29.307 [2024-11-28 09:14:23.211620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:29.307 [2024-11-28 09:14:23.211632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.221578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.221837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:29.307 [2024-11-28 09:14:23.221858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.925 ms 00:30:29.307 [2024-11-28 09:14:23.221883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.222040] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:29.307 [2024-11-28 09:14:23.222055] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:29.307 [2024-11-28 09:14:23.222065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.222082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:29.307 [2024-11-28 09:14:23.222095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:29.307 [2024-11-28 09:14:23.222103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.234569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.234621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:29.307 [2024-11-28 09:14:23.234632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.447 ms 00:30:29.307 [2024-11-28 09:14:23.234644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.234787] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.234813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:29.307 [2024-11-28 09:14:23.234829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:29.307 [2024-11-28 09:14:23.234836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.234892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.234902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:29.307 [2024-11-28 09:14:23.234911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:29.307 [2024-11-28 09:14:23.234923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.235257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.235269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:29.307 [2024-11-28 09:14:23.235277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:30:29.307 [2024-11-28 09:14:23.235288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.235306] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:29.307 [2024-11-28 09:14:23.235316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.235324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:29.307 [2024-11-28 09:14:23.235338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:29.307 [2024-11-28 09:14:23.235352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.246330] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:29.307 [2024-11-28 09:14:23.246497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.246509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:29.307 [2024-11-28 09:14:23.246519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.126 ms 00:30:29.307 [2024-11-28 09:14:23.246528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.249068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.249237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:29.307 [2024-11-28 09:14:23.249257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:30:29.307 [2024-11-28 09:14:23.249265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.307 [2024-11-28 09:14:23.249380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.307 [2024-11-28 09:14:23.249391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:29.308 [2024-11-28 09:14:23.249401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:29.308 [2024-11-28 09:14:23.249415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.308 [2024-11-28 09:14:23.249442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.308 [2024-11-28 09:14:23.249455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:29.308 [2024-11-28 09:14:23.249464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:29.308 [2024-11-28 09:14:23.249473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.308 [2024-11-28 09:14:23.249513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test 
skipped 00:30:29.308 [2024-11-28 09:14:23.249528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.308 [2024-11-28 09:14:23.249537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:29.308 [2024-11-28 09:14:23.249574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:29.308 [2024-11-28 09:14:23.249587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.308 [2024-11-28 09:14:23.257424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.308 [2024-11-28 09:14:23.257481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:29.308 [2024-11-28 09:14:23.257501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.814 ms 00:30:29.308 [2024-11-28 09:14:23.257510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.308 [2024-11-28 09:14:23.257619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.308 [2024-11-28 09:14:23.257632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:29.308 [2024-11-28 09:14:23.257641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:29.308 [2024-11-28 09:14:23.257652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.308 [2024-11-28 09:14:23.259081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 93.249 ms, result 0 00:30:30.397  [2024-11-28T09:14:25.462Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-28T09:14:26.407Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-28T09:14:27.349Z] Copying: 34/1024 [MB] (12 MBps) [2024-11-28T09:14:28.282Z] Copying: 47/1024 [MB] (13 MBps) [2024-11-28T09:14:29.666Z] Copying: 85/1024 [MB] (37 MBps) [2024-11-28T09:14:30.610Z] Copying: 105/1024 [MB] (19 MBps) [2024-11-28T09:14:31.554Z] Copying: 128/1024 [MB] (23 MBps) [2024-11-28T09:14:32.496Z] 
Copying: 144/1024 [MB] (15 MBps) [2024-11-28T09:14:33.440Z] Copying: 159/1024 [MB] (14 MBps) [2024-11-28T09:14:34.385Z] Copying: 172/1024 [MB] (12 MBps) [2024-11-28T09:14:35.349Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-28T09:14:36.292Z] Copying: 197/1024 [MB] (14 MBps) [2024-11-28T09:14:37.278Z] Copying: 207/1024 [MB] (10 MBps) [2024-11-28T09:14:38.650Z] Copying: 225/1024 [MB] (18 MBps) [2024-11-28T09:14:39.583Z] Copying: 246/1024 [MB] (21 MBps) [2024-11-28T09:14:40.515Z] Copying: 267/1024 [MB] (20 MBps) [2024-11-28T09:14:41.448Z] Copying: 291/1024 [MB] (24 MBps) [2024-11-28T09:14:42.383Z] Copying: 311/1024 [MB] (20 MBps) [2024-11-28T09:14:43.327Z] Copying: 332/1024 [MB] (21 MBps) [2024-11-28T09:14:44.705Z] Copying: 345/1024 [MB] (12 MBps) [2024-11-28T09:14:45.269Z] Copying: 363/1024 [MB] (17 MBps) [2024-11-28T09:14:46.642Z] Copying: 399/1024 [MB] (36 MBps) [2024-11-28T09:14:47.576Z] Copying: 427/1024 [MB] (28 MBps) [2024-11-28T09:14:48.509Z] Copying: 451/1024 [MB] (23 MBps) [2024-11-28T09:14:49.445Z] Copying: 472/1024 [MB] (21 MBps) [2024-11-28T09:14:50.379Z] Copying: 511/1024 [MB] (38 MBps) [2024-11-28T09:14:51.314Z] Copying: 536/1024 [MB] (25 MBps) [2024-11-28T09:14:52.688Z] Copying: 572/1024 [MB] (36 MBps) [2024-11-28T09:14:53.622Z] Copying: 611/1024 [MB] (38 MBps) [2024-11-28T09:14:54.555Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-28T09:14:55.487Z] Copying: 673/1024 [MB] (38 MBps) [2024-11-28T09:14:56.464Z] Copying: 696/1024 [MB] (23 MBps) [2024-11-28T09:14:57.402Z] Copying: 718/1024 [MB] (21 MBps) [2024-11-28T09:14:58.342Z] Copying: 746/1024 [MB] (27 MBps) [2024-11-28T09:14:59.284Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-28T09:15:00.670Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-28T09:15:01.612Z] Copying: 782/1024 [MB] (15 MBps) [2024-11-28T09:15:02.546Z] Copying: 792/1024 [MB] (10 MBps) [2024-11-28T09:15:03.480Z] Copying: 811/1024 [MB] (18 MBps) [2024-11-28T09:15:04.412Z] Copying: 833/1024 [MB] (22 MBps) [2024-11-28T09:15:05.343Z] Copying: 862/1024 
[MB] (29 MBps) [2024-11-28T09:15:06.276Z] Copying: 896/1024 [MB] (34 MBps) [2024-11-28T09:15:07.651Z] Copying: 919/1024 [MB] (23 MBps) [2024-11-28T09:15:08.582Z] Copying: 953/1024 [MB] (34 MBps) [2024-11-28T09:15:09.527Z] Copying: 978/1024 [MB] (24 MBps) [2024-11-28T09:15:10.474Z] Copying: 994/1024 [MB] (16 MBps) [2024-11-28T09:15:11.417Z] Copying: 1028168/1048576 [kB] (10072 kBps) [2024-11-28T09:15:12.352Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-28T09:15:12.614Z] Copying: 1048408/1048576 [kB] (8720 kBps) [2024-11-28T09:15:12.614Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-28 09:15:12.419587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.494 [2024-11-28 09:15:12.419645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:18.494 [2024-11-28 09:15:12.419660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:18.494 [2024-11-28 09:15:12.419667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.494 [2024-11-28 09:15:12.420993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:18.494 [2024-11-28 09:15:12.424852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.494 [2024-11-28 09:15:12.424880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:18.494 [2024-11-28 09:15:12.424891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:31:18.494 [2024-11-28 09:15:12.424898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.494 [2024-11-28 09:15:12.432705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.494 [2024-11-28 09:15:12.432820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:18.494 [2024-11-28 09:15:12.432839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:31:18.494 [2024-11-28 
09:15:12.432847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.494 [2024-11-28 09:15:12.432874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.494 [2024-11-28 09:15:12.432881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:18.494 [2024-11-28 09:15:12.432889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:18.494 [2024-11-28 09:15:12.432899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.494 [2024-11-28 09:15:12.432944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.494 [2024-11-28 09:15:12.432952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:18.494 [2024-11-28 09:15:12.432959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:18.494 [2024-11-28 09:15:12.432967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.494 [2024-11-28 09:15:12.432978] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:18.494 [2024-11-28 09:15:12.432988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125184 / 261120 wr_cnt: 1 state: open 00:31:18.494 [2024-11-28 09:15:12.432997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 
09:15:12.433029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 
09:15:12.433116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:18.494 [2024-11-28 09:15:12.433140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 
[2024-11-28 09:15:12.433216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:31:18.495 [2024-11-28 09:15:12.433304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: 
free 00:31:18.495 [2024-11-28 09:15:12.433394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:31:18.495 [2024-11-28 09:15:12.433488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 
0 state: free 00:31:18.495 [2024-11-28 09:15:12.433581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:18.495 [2024-11-28 09:15:12.433651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:18.495 [2024-11-28 09:15:12.433657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10168779-6628-4374-9869-80940d4e4796 00:31:18.495 [2024-11-28 09:15:12.433669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125184 00:31:18.495 [2024-11-28 09:15:12.433678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 125216 00:31:18.495 [2024-11-28 09:15:12.433684] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125184 00:31:18.495 [2024-11-28 09:15:12.433691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:31:18.495 [2024-11-28 09:15:12.433696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:18.495 [2024-11-28 09:15:12.433703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:18.495 [2024-11-28 09:15:12.433712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:18.495 [2024-11-28 09:15:12.433718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:18.495 [2024-11-28 09:15:12.433727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:18.495 [2024-11-28 09:15:12.433733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.495 [2024-11-28 09:15:12.433742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:18.495 [2024-11-28 09:15:12.433751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:31:18.495 [2024-11-28 09:15:12.433758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.495 [2024-11-28 09:15:12.435497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.495 [2024-11-28 09:15:12.435518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:18.496 [2024-11-28 09:15:12.435526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:31:18.496 [2024-11-28 09:15:12.435533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.435628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:18.496 [2024-11-28 09:15:12.435636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:18.496 [2024-11-28 09:15:12.435644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:18.496 [2024-11-28 
09:15:12.435650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.440768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.440804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:18.496 [2024-11-28 09:15:12.440816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.440822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.440860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.440870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:18.496 [2024-11-28 09:15:12.440877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.440883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.440918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.440926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:18.496 [2024-11-28 09:15:12.440933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.440942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.440955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.440962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:18.496 [2024-11-28 09:15:12.440968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.440974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.451461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:31:18.496 [2024-11-28 09:15:12.451600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:18.496 [2024-11-28 09:15:12.451615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.451630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:18.496 [2024-11-28 09:15:12.460641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:18.496 [2024-11-28 09:15:12.460706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:18.496 [2024-11-28 09:15:12.460751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:18.496 [2024-11-28 09:15:12.460828] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:18.496 [2024-11-28 09:15:12.460879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:18.496 [2024-11-28 09:15:12.460933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.460940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.460985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:18.496 [2024-11-28 09:15:12.460994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:18.496 [2024-11-28 09:15:12.461004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:18.496 [2024-11-28 09:15:12.461011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:18.496 [2024-11-28 09:15:12.461116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 43.078 ms, result 0 00:31:19.437 00:31:19.437 00:31:19.437 09:15:13 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 
00:31:19.437 [2024-11-28 09:15:13.278236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:19.437 [2024-11-28 09:15:13.278537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95234 ] 00:31:19.437 [2024-11-28 09:15:13.428311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.437 [2024-11-28 09:15:13.480566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.698 [2024-11-28 09:15:13.580022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.698 [2024-11-28 09:15:13.580076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.698 [2024-11-28 09:15:13.734925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.734959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:19.698 [2024-11-28 09:15:13.734973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.698 [2024-11-28 09:15:13.734979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.698 [2024-11-28 09:15:13.735041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:19.698 [2024-11-28 09:15:13.735050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:19.698 [2024-11-28 09:15:13.735252] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:19.698 [2024-11-28 09:15:13.735265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.698 [2024-11-28 09:15:13.735278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:31:19.698 [2024-11-28 09:15:13.735285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735474] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:19.698 [2024-11-28 09:15:13.735492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:19.698 [2024-11-28 09:15:13.735510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:19.698 [2024-11-28 09:15:13.735520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:19.698 [2024-11-28 09:15:13.735580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:19.698 [2024-11-28 09:15:13.735586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.698 [2024-11-28 09:15:13.735784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:31:19.698 [2024-11-28 09:15:13.735790] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.698 [2024-11-28 09:15:13.735916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:19.698 [2024-11-28 09:15:13.735922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.735955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:19.698 [2024-11-28 09:15:13.735961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.698 [2024-11-28 09:15:13.735967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.735981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:19.698 [2024-11-28 09:15:13.737584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.737715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.698 [2024-11-28 09:15:13.737727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:31:19.698 [2024-11-28 09:15:13.737733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.737763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.737771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:19.698 [2024-11-28 09:15:13.737782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:19.698 [2024-11-28 09:15:13.737788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:19.698 [2024-11-28 09:15:13.737818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:19.698 [2024-11-28 09:15:13.737835] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:19.698 [2024-11-28 09:15:13.737866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:19.698 [2024-11-28 09:15:13.737885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:19.698 [2024-11-28 09:15:13.737967] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:19.698 [2024-11-28 09:15:13.737979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:19.698 [2024-11-28 09:15:13.737988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:19.698 [2024-11-28 09:15:13.737996] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:19.698 [2024-11-28 09:15:13.738003] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:19.698 [2024-11-28 09:15:13.738009] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:19.698 [2024-11-28 09:15:13.738017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:19.698 [2024-11-28 09:15:13.738022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:19.698 [2024-11-28 09:15:13.738028] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:19.698 [2024-11-28 09:15:13.738036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.738042] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:19.698 [2024-11-28 09:15:13.738048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:31:19.698 [2024-11-28 09:15:13.738053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.738116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.698 [2024-11-28 09:15:13.738125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:19.698 [2024-11-28 09:15:13.738131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:19.698 [2024-11-28 09:15:13.738138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.698 [2024-11-28 09:15:13.738216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:19.698 [2024-11-28 09:15:13.738225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:19.698 [2024-11-28 09:15:13.738232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.698 [2024-11-28 09:15:13.738238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.698 [2024-11-28 09:15:13.738245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:19.698 [2024-11-28 09:15:13.738257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:19.698 [2024-11-28 09:15:13.738262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:19.698 [2024-11-28 09:15:13.738268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:19.698 [2024-11-28 09:15:13.738277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:19.698 [2024-11-28 09:15:13.738283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.698 [2024-11-28 09:15:13.738289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:19.698 [2024-11-28 09:15:13.738295] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:19.698 [2024-11-28 09:15:13.738302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.699 [2024-11-28 09:15:13.738311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:19.699 [2024-11-28 09:15:13.738318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:19.699 [2024-11-28 09:15:13.738323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:19.699 [2024-11-28 09:15:13.738336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:19.699 [2024-11-28 09:15:13.738353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:19.699 [2024-11-28 09:15:13.738372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:19.699 [2024-11-28 09:15:13.738389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:19.699 [2024-11-28 
09:15:13.738407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:19.699 [2024-11-28 09:15:13.738425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.699 [2024-11-28 09:15:13.738436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:19.699 [2024-11-28 09:15:13.738442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:19.699 [2024-11-28 09:15:13.738447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.699 [2024-11-28 09:15:13.738453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:19.699 [2024-11-28 09:15:13.738460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:19.699 [2024-11-28 09:15:13.738465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:19.699 [2024-11-28 09:15:13.738478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:19.699 [2024-11-28 09:15:13.738484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738490] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:19.699 [2024-11-28 09:15:13.738497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:19.699 [2024-11-28 09:15:13.738506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:31:19.699 [2024-11-28 09:15:13.738518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:19.699 [2024-11-28 09:15:13.738525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:19.699 [2024-11-28 09:15:13.738531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:19.699 [2024-11-28 09:15:13.738537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:19.699 [2024-11-28 09:15:13.738543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:19.699 [2024-11-28 09:15:13.738549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:19.699 [2024-11-28 09:15:13.738556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:19.699 [2024-11-28 09:15:13.738568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:19.699 [2024-11-28 09:15:13.738583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:19.699 [2024-11-28 09:15:13.738589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:19.699 [2024-11-28 09:15:13.738596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:19.699 [2024-11-28 09:15:13.738602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:19.699 [2024-11-28 09:15:13.738609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:19.699 [2024-11-28 09:15:13.738617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:19.699 [2024-11-28 09:15:13.738623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:19.699 [2024-11-28 09:15:13.738629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:19.699 [2024-11-28 09:15:13.738635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:19.699 [2024-11-28 09:15:13.738666] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:19.699 [2024-11-28 09:15:13.738673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.699 [2024-11-28 09:15:13.738679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:19.699 [2024-11-28 
09:15:13.738685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:19.699 [2024-11-28 09:15:13.738691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:19.699 [2024-11-28 09:15:13.738696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:19.699 [2024-11-28 09:15:13.738702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.699 [2024-11-28 09:15:13.738707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:19.699 [2024-11-28 09:15:13.738714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:31:19.699 [2024-11-28 09:15:13.738719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.699 [2024-11-28 09:15:13.754254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.699 [2024-11-28 09:15:13.754287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.699 [2024-11-28 09:15:13.754298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.491 ms 00:31:19.699 [2024-11-28 09:15:13.754307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.699 [2024-11-28 09:15:13.754373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.699 [2024-11-28 09:15:13.754385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:19.699 [2024-11-28 09:15:13.754392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:19.699 [2024-11-28 09:15:13.754397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.699 [2024-11-28 09:15:13.764926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.699 [2024-11-28 
09:15:13.764967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.699 [2024-11-28 09:15:13.764983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.486 ms 00:31:19.699 [2024-11-28 09:15:13.764992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.699 [2024-11-28 09:15:13.765029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.699 [2024-11-28 09:15:13.765040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.699 [2024-11-28 09:15:13.765049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:19.699 [2024-11-28 09:15:13.765058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.699 [2024-11-28 09:15:13.765168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.765181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.700 [2024-11-28 09:15:13.765192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:19.700 [2024-11-28 09:15:13.765208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.765354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.765365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.700 [2024-11-28 09:15:13.765381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:31:19.700 [2024-11-28 09:15:13.765390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.771369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.771395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.700 [2024-11-28 09:15:13.771402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 5.957 ms 00:31:19.700 [2024-11-28 09:15:13.771412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.771492] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:19.700 [2024-11-28 09:15:13.771501] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:19.700 [2024-11-28 09:15:13.771508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.771515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:19.700 [2024-11-28 09:15:13.771529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:19.700 [2024-11-28 09:15:13.771535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.780785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.780815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:19.700 [2024-11-28 09:15:13.780823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.238 ms 00:31:19.700 [2024-11-28 09:15:13.780829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.780925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.780933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:19.700 [2024-11-28 09:15:13.780939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:19.700 [2024-11-28 09:15:13.780945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.780979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.780987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:31:19.700 [2024-11-28 09:15:13.780997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:19.700 [2024-11-28 09:15:13.781007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.781242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.781257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.700 [2024-11-28 09:15:13.781263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:31:19.700 [2024-11-28 09:15:13.781269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.781284] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:19.700 [2024-11-28 09:15:13.781291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.781298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:19.700 [2024-11-28 09:15:13.781309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:19.700 [2024-11-28 09:15:13.781317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.788440] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:19.700 [2024-11-28 09:15:13.788533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.788541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:19.700 [2024-11-28 09:15:13.788552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.205 ms 00:31:19.700 [2024-11-28 09:15:13.788558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.790502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.790523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:19.700 [2024-11-28 09:15:13.790530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:31:19.700 [2024-11-28 09:15:13.790537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.790583] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:19.700 [2024-11-28 09:15:13.791040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.791051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:19.700 [2024-11-28 09:15:13.791058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:31:19.700 [2024-11-28 09:15:13.791064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.791090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.791098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:19.700 [2024-11-28 09:15:13.791104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:19.700 [2024-11-28 09:15:13.791110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.791146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:19.700 [2024-11-28 09:15:13.791155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.791161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:19.700 [2024-11-28 09:15:13.791167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:19.700 [2024-11-28 09:15:13.791173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:19.700 [2024-11-28 09:15:13.795505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.795535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:19.700 [2024-11-28 09:15:13.795544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.319 ms 00:31:19.700 [2024-11-28 09:15:13.795554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.795612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.700 [2024-11-28 09:15:13.795619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:19.700 [2024-11-28 09:15:13.795626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:19.700 [2024-11-28 09:15:13.795631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.700 [2024-11-28 09:15:13.796473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 61.210 ms, result 0 00:31:21.090  [2024-11-28T09:15:16.152Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-28T09:15:17.086Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-28T09:15:18.030Z] Copying: 45/1024 [MB] (23 MBps) [2024-11-28T09:15:18.968Z] Copying: 55/1024 [MB] (10 MBps) [2024-11-28T09:15:20.351Z] Copying: 72/1024 [MB] (16 MBps) [2024-11-28T09:15:21.289Z] Copying: 85/1024 [MB] (12 MBps) [2024-11-28T09:15:22.233Z] Copying: 96/1024 [MB] (11 MBps) [2024-11-28T09:15:23.177Z] Copying: 111/1024 [MB] (14 MBps) [2024-11-28T09:15:24.118Z] Copying: 121/1024 [MB] (10 MBps) [2024-11-28T09:15:25.061Z] Copying: 132/1024 [MB] (10 MBps) [2024-11-28T09:15:26.002Z] Copying: 148/1024 [MB] (16 MBps) [2024-11-28T09:15:26.954Z] Copying: 161/1024 [MB] (12 MBps) [2024-11-28T09:15:27.943Z] Copying: 178/1024 [MB] (17 MBps) [2024-11-28T09:15:29.329Z] Copying: 193/1024 [MB] (15 MBps) [2024-11-28T09:15:30.272Z] Copying: 204/1024 [MB] (11 MBps) 
[2024-11-28T09:15:31.214Z] Copying: 222/1024 [MB] (18 MBps) [2024-11-28T09:15:32.158Z] Copying: 239/1024 [MB] (17 MBps) [2024-11-28T09:15:33.102Z] Copying: 251/1024 [MB] (11 MBps) [2024-11-28T09:15:34.049Z] Copying: 270/1024 [MB] (18 MBps) [2024-11-28T09:15:34.992Z] Copying: 286/1024 [MB] (15 MBps) [2024-11-28T09:15:36.378Z] Copying: 300/1024 [MB] (14 MBps) [2024-11-28T09:15:36.949Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-28T09:15:38.336Z] Copying: 323/1024 [MB] (11 MBps) [2024-11-28T09:15:39.282Z] Copying: 338/1024 [MB] (15 MBps) [2024-11-28T09:15:40.220Z] Copying: 355/1024 [MB] (16 MBps) [2024-11-28T09:15:41.162Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-28T09:15:42.106Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-28T09:15:43.049Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-28T09:15:43.990Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-28T09:15:45.375Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-28T09:15:45.943Z] Copying: 422/1024 [MB] (11 MBps) [2024-11-28T09:15:47.318Z] Copying: 437/1024 [MB] (15 MBps) [2024-11-28T09:15:48.261Z] Copying: 464/1024 [MB] (26 MBps) [2024-11-28T09:15:49.198Z] Copying: 479/1024 [MB] (15 MBps) [2024-11-28T09:15:50.141Z] Copying: 499/1024 [MB] (19 MBps) [2024-11-28T09:15:51.082Z] Copying: 520/1024 [MB] (21 MBps) [2024-11-28T09:15:52.024Z] Copying: 544/1024 [MB] (23 MBps) [2024-11-28T09:15:52.968Z] Copying: 557/1024 [MB] (13 MBps) [2024-11-28T09:15:54.357Z] Copying: 578/1024 [MB] (20 MBps) [2024-11-28T09:15:55.302Z] Copying: 600/1024 [MB] (22 MBps) [2024-11-28T09:15:56.246Z] Copying: 623/1024 [MB] (22 MBps) [2024-11-28T09:15:57.190Z] Copying: 641/1024 [MB] (17 MBps) [2024-11-28T09:15:58.129Z] Copying: 662/1024 [MB] (21 MBps) [2024-11-28T09:15:59.111Z] Copying: 687/1024 [MB] (24 MBps) [2024-11-28T09:16:00.087Z] Copying: 703/1024 [MB] (16 MBps) [2024-11-28T09:16:01.029Z] Copying: 714/1024 [MB] (10 MBps) [2024-11-28T09:16:01.966Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-28T09:16:03.352Z] Copying: 735/1024 [MB] (10 MBps) 
[2024-11-28T09:16:04.296Z] Copying: 752/1024 [MB] (16 MBps) [2024-11-28T09:16:05.241Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-28T09:16:06.184Z] Copying: 775/1024 [MB] (11 MBps) [2024-11-28T09:16:07.125Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-28T09:16:08.067Z] Copying: 800/1024 [MB] (15 MBps) [2024-11-28T09:16:09.012Z] Copying: 819/1024 [MB] (19 MBps) [2024-11-28T09:16:09.957Z] Copying: 836/1024 [MB] (16 MBps) [2024-11-28T09:16:11.343Z] Copying: 855/1024 [MB] (19 MBps) [2024-11-28T09:16:12.287Z] Copying: 872/1024 [MB] (16 MBps) [2024-11-28T09:16:13.233Z] Copying: 887/1024 [MB] (15 MBps) [2024-11-28T09:16:14.177Z] Copying: 903/1024 [MB] (15 MBps) [2024-11-28T09:16:15.123Z] Copying: 921/1024 [MB] (17 MBps) [2024-11-28T09:16:16.066Z] Copying: 938/1024 [MB] (17 MBps) [2024-11-28T09:16:17.012Z] Copying: 953/1024 [MB] (15 MBps) [2024-11-28T09:16:17.955Z] Copying: 967/1024 [MB] (13 MBps) [2024-11-28T09:16:19.341Z] Copying: 981/1024 [MB] (13 MBps) [2024-11-28T09:16:20.285Z] Copying: 1001/1024 [MB] (19 MBps) [2024-11-28T09:16:20.549Z] Copying: 1017/1024 [MB] (16 MBps) [2024-11-28T09:16:20.549Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-28 09:16:20.442340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.429 [2024-11-28 09:16:20.442852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:26.429 [2024-11-28 09:16:20.442894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:26.429 [2024-11-28 09:16:20.442918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.429 [2024-11-28 09:16:20.442978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:26.429 [2024-11-28 09:16:20.444463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.429 [2024-11-28 09:16:20.444523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:26.429 [2024-11-28 
09:16:20.444549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:32:26.429 [2024-11-28 09:16:20.444570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.429 [2024-11-28 09:16:20.445218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.429 [2024-11-28 09:16:20.445280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:26.429 [2024-11-28 09:16:20.445306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:32:26.429 [2024-11-28 09:16:20.445327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.429 [2024-11-28 09:16:20.445457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.429 [2024-11-28 09:16:20.445484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:26.429 [2024-11-28 09:16:20.445523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:26.429 [2024-11-28 09:16:20.445546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.429 [2024-11-28 09:16:20.445652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.429 [2024-11-28 09:16:20.445664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:26.429 [2024-11-28 09:16:20.445677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:26.429 [2024-11-28 09:16:20.445687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.429 [2024-11-28 09:16:20.445703] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:26.429 [2024-11-28 09:16:20.445719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:26.429 [2024-11-28 09:16:20.445730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 
09:16:20.445738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.445794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 
09:16:20.446305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 
[2024-11-28 09:16:20.446681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:26.429 [2024-11-28 09:16:20.446829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:32:26.430 [2024-11-28 09:16:20.446837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: 
free 00:32:26.430 [2024-11-28 09:16:20.446957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.446996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:32:26.430 [2024-11-28 09:16:20.447072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 
0 state: free 00:32:26.430 [2024-11-28 09:16:20.447184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:32:26.430 [2024-11-28 09:16:20.447301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:26.430 [2024-11-28 09:16:20.447316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10168779-6628-4374-9869-80940d4e4796 00:32:26.430 [2024-11-28 09:16:20.447326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:26.430 [2024-11-28 09:16:20.447335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5920 00:32:26.430 [2024-11-28 09:16:20.447344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5888 00:32:26.430 [2024-11-28 09:16:20.447354] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0054 00:32:26.430 [2024-11-28 09:16:20.447362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:26.430 [2024-11-28 09:16:20.447375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:26.430 [2024-11-28 09:16:20.447384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:26.430 [2024-11-28 09:16:20.447391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:26.430 [2024-11-28 09:16:20.447398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:26.430 [2024-11-28 09:16:20.447409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.430 [2024-11-28 09:16:20.447419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:26.430 [2024-11-28 09:16:20.447428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:32:26.430 [2024-11-28 09:16:20.447438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.450665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.430 [2024-11-28 09:16:20.450716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:26.430 [2024-11-28 
09:16:20.450728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:32:26.430 [2024-11-28 09:16:20.450740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.450926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.430 [2024-11-28 09:16:20.450943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:26.430 [2024-11-28 09:16:20.450954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:32:26.430 [2024-11-28 09:16:20.450961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.460388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.430 [2024-11-28 09:16:20.460561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:26.430 [2024-11-28 09:16:20.460630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.430 [2024-11-28 09:16:20.460656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.460743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.430 [2024-11-28 09:16:20.460767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:26.430 [2024-11-28 09:16:20.460788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.430 [2024-11-28 09:16:20.460876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.460986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.430 [2024-11-28 09:16:20.461014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:26.430 [2024-11-28 09:16:20.461035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.430 [2024-11-28 09:16:20.461062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.461093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.430 [2024-11-28 09:16:20.461171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:26.430 [2024-11-28 09:16:20.461204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.430 [2024-11-28 09:16:20.461225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.430 [2024-11-28 09:16:20.481164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.430 [2024-11-28 09:16:20.481409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:26.431 [2024-11-28 09:16:20.481684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.481752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.498029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.498233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:26.431 [2024-11-28 09:16:20.498293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.498318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.498401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.498425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:26.431 [2024-11-28 09:16:20.498447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.498467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.498528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.498551] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:26.431 [2024-11-28 09:16:20.498573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.498627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.498723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.498748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:26.431 [2024-11-28 09:16:20.498872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.498907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.498964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.499001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:26.431 [2024-11-28 09:16:20.499067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.499090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.499157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.499186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:26.431 [2024-11-28 09:16:20.499208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.431 [2024-11-28 09:16:20.499283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.499370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.431 [2024-11-28 09:16:20.499403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:26.431 [2024-11-28 09:16:20.499425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:32:26.431 [2024-11-28 09:16:20.499444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.431 [2024-11-28 09:16:20.499632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.274 ms, result 0 00:32:26.692 00:32:26.692 00:32:26.954 09:16:20 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:29.503 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93197 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93197 ']' 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93197 00:32:29.503 Process with pid 93197 is not found 00:32:29.503 Remove shared memory files 00:32:29.503 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93197) - No such process 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93197 is not found' 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@206 
-- # rm -f rm -f /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_band_md /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_l2p_l1 /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_l2p_l2 /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_l2p_l2_ctx /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_nvc_md /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_p2l_pool /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_sb /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_sb_shm /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_trim_bitmap /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_trim_log /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_trim_md /dev/hugepages/ftl_10168779-6628-4374-9869-80940d4e4796_vmap 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:29.503 09:16:23 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:29.503 ************************************ 00:32:29.503 END TEST ftl_restore_fast 00:32:29.504 ************************************ 00:32:29.504 00:32:29.504 real 4m29.183s 00:32:29.504 user 4m17.658s 00:32:29.504 sys 0m11.529s 00:32:29.504 09:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.504 09:16:23 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@14 -- # killprocess 84315 00:32:29.504 Process with pid 84315 is not found 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@950 -- # '[' -z 84315 ']' 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@954 -- # kill -0 84315 00:32:29.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84315) - No such process 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 
84315 is not found' 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95974 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95974 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@831 -- # '[' -z 95974 ']' 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.504 09:16:23 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:29.504 09:16:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:29.504 [2024-11-28 09:16:23.406575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:29.504 [2024-11-28 09:16:23.406881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95974 ] 00:32:29.504 [2024-11-28 09:16:23.554116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.765 [2024-11-28 09:16:23.625486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.338 09:16:24 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.338 09:16:24 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:30.338 09:16:24 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:30.600 nvme0n1 00:32:30.600 09:16:24 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:30.600 09:16:24 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:30.600 09:16:24 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:30.862 09:16:24 ftl -- ftl/common.sh@28 -- # stores=f00d4610-f7bc-4d47-8023-01799efa4ceb 00:32:30.862 09:16:24 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:30.862 09:16:24 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f00d4610-f7bc-4d47-8023-01799efa4ceb 00:32:30.862 09:16:24 ftl -- ftl/ftl.sh@23 -- # killprocess 95974 00:32:30.862 09:16:24 ftl -- common/autotest_common.sh@950 -- # '[' -z 95974 ']' 00:32:30.862 09:16:24 ftl -- common/autotest_common.sh@954 -- # kill -0 95974 00:32:30.862 09:16:24 ftl -- common/autotest_common.sh@955 -- # uname 00:32:30.862 09:16:24 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.862 09:16:24 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95974 00:32:31.131 killing process with pid 95974 00:32:31.131 09:16:24 ftl -- common/autotest_common.sh@956 
-- # process_name=reactor_0 00:32:31.131 09:16:24 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:31.131 09:16:24 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95974' 00:32:31.131 09:16:24 ftl -- common/autotest_common.sh@969 -- # kill 95974 00:32:31.131 09:16:24 ftl -- common/autotest_common.sh@974 -- # wait 95974 00:32:31.398 09:16:25 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:31.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:31.660 Waiting for block devices as requested 00:32:31.660 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:31.922 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:31.922 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:31.922 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:37.216 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:37.216 09:16:31 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:37.216 Remove shared memory files 00:32:37.216 09:16:31 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:37.216 09:16:31 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:37.216 09:16:31 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:37.216 09:16:31 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:37.216 09:16:31 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:37.216 09:16:31 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:37.216 ************************************ 00:32:37.216 END TEST ftl 00:32:37.216 ************************************ 00:32:37.216 00:32:37.216 real 17m26.264s 00:32:37.216 user 19m11.508s 00:32:37.216 sys 1m23.370s 00:32:37.216 09:16:31 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.216 09:16:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:37.216 09:16:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:37.216 09:16:31 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:37.216 09:16:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:37.216 09:16:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:37.216 09:16:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:37.216 09:16:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:37.216 09:16:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:37.216 09:16:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:37.216 09:16:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:37.216 09:16:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:37.216 09:16:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:37.216 09:16:31 -- common/autotest_common.sh@10 -- # set +x 00:32:37.216 09:16:31 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:37.216 09:16:31 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:37.216 09:16:31 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:37.216 09:16:31 -- common/autotest_common.sh@10 -- # set +x 00:32:38.604 INFO: APP EXITING 00:32:38.604 INFO: killing all VMs 00:32:38.604 INFO: killing vhost app 00:32:38.604 INFO: EXIT DONE 00:32:38.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:39.207 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:39.477 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:39.477 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:39.477 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:39.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:40.312 Cleaning 00:32:40.312 Removing: /var/run/dpdk/spdk0/config 00:32:40.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:40.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:40.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:40.312 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:40.312 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:40.312 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:40.312 Removing: /var/run/dpdk/spdk0 00:32:40.312 Removing: /var/run/dpdk/spdk_pid69773 00:32:40.312 Removing: /var/run/dpdk/spdk_pid69931 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70133 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70215 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70243 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70355 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70367 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70550 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70628 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70708 00:32:40.312 Removing: /var/run/dpdk/spdk_pid70803 00:32:40.313 Removing: /var/run/dpdk/spdk_pid70882 00:32:40.313 Removing: /var/run/dpdk/spdk_pid70917 00:32:40.313 Removing: /var/run/dpdk/spdk_pid70954 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71019 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71119 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71539 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71587 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71633 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71649 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71707 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71723 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71781 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71797 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71839 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71857 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71899 00:32:40.313 Removing: /var/run/dpdk/spdk_pid71917 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72044 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72075 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72153 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72314 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72387 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72418 00:32:40.313 Removing: /var/run/dpdk/spdk_pid72842 
00:32:40.313 Removing: /var/run/dpdk/spdk_pid72929 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73029 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73060 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73091 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73164 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73791 00:32:40.313 Removing: /var/run/dpdk/spdk_pid73822 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74270 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74363 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74472 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74509 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74534 00:32:40.313 Removing: /var/run/dpdk/spdk_pid74554 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76379 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76494 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76498 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76521 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76560 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76564 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76576 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76616 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76620 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76632 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76671 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76675 00:32:40.313 Removing: /var/run/dpdk/spdk_pid76687 00:32:40.313 Removing: /var/run/dpdk/spdk_pid78048 00:32:40.313 Removing: /var/run/dpdk/spdk_pid78134 00:32:40.313 Removing: /var/run/dpdk/spdk_pid79531 00:32:40.313 Removing: /var/run/dpdk/spdk_pid80901 00:32:40.313 Removing: /var/run/dpdk/spdk_pid80966 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81020 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81074 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81152 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81221 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81367 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81714 00:32:40.313 Removing: /var/run/dpdk/spdk_pid81734 00:32:40.313 Removing: /var/run/dpdk/spdk_pid82176 
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82356
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82444
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82548
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82590
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82616
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82901
00:32:40.313 Removing: /var/run/dpdk/spdk_pid82945
00:32:40.313 Removing: /var/run/dpdk/spdk_pid83001
00:32:40.313 Removing: /var/run/dpdk/spdk_pid83371
00:32:40.313 Removing: /var/run/dpdk/spdk_pid83516
00:32:40.313 Removing: /var/run/dpdk/spdk_pid84315
00:32:40.313 Removing: /var/run/dpdk/spdk_pid84425
00:32:40.313 Removing: /var/run/dpdk/spdk_pid84585
00:32:40.313 Removing: /var/run/dpdk/spdk_pid84677
00:32:40.313 Removing: /var/run/dpdk/spdk_pid84974
00:32:40.313 Removing: /var/run/dpdk/spdk_pid85216
00:32:40.313 Removing: /var/run/dpdk/spdk_pid85570
00:32:40.313 Removing: /var/run/dpdk/spdk_pid85735
00:32:40.313 Removing: /var/run/dpdk/spdk_pid85955
00:32:40.313 Removing: /var/run/dpdk/spdk_pid85991
00:32:40.313 Removing: /var/run/dpdk/spdk_pid86226
00:32:40.313 Removing: /var/run/dpdk/spdk_pid86246
00:32:40.313 Removing: /var/run/dpdk/spdk_pid86287
00:32:40.313 Removing: /var/run/dpdk/spdk_pid86579
00:32:40.313 Removing: /var/run/dpdk/spdk_pid86799
00:32:40.313 Removing: /var/run/dpdk/spdk_pid87459
00:32:40.313 Removing: /var/run/dpdk/spdk_pid88170
00:32:40.313 Removing: /var/run/dpdk/spdk_pid88786
00:32:40.313 Removing: /var/run/dpdk/spdk_pid89642
00:32:40.313 Removing: /var/run/dpdk/spdk_pid89784
00:32:40.313 Removing: /var/run/dpdk/spdk_pid89865
00:32:40.313 Removing: /var/run/dpdk/spdk_pid90395
00:32:40.313 Removing: /var/run/dpdk/spdk_pid90443
00:32:40.313 Removing: /var/run/dpdk/spdk_pid91118
00:32:40.313 Removing: /var/run/dpdk/spdk_pid91507
00:32:40.313 Removing: /var/run/dpdk/spdk_pid92273
00:32:40.313 Removing: /var/run/dpdk/spdk_pid92395
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92431
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92484
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92536
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92592
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92770
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92839
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92910
00:32:40.575 Removing: /var/run/dpdk/spdk_pid92972
00:32:40.575 Removing: /var/run/dpdk/spdk_pid93007
00:32:40.575 Removing: /var/run/dpdk/spdk_pid93069
00:32:40.575 Removing: /var/run/dpdk/spdk_pid93197
00:32:40.575 Removing: /var/run/dpdk/spdk_pid93419
00:32:40.575 Removing: /var/run/dpdk/spdk_pid93987
00:32:40.575 Removing: /var/run/dpdk/spdk_pid94720
00:32:40.575 Removing: /var/run/dpdk/spdk_pid95234
00:32:40.575 Removing: /var/run/dpdk/spdk_pid95974
00:32:40.575 Clean
00:32:40.575 09:16:34 -- common/autotest_common.sh@1451 -- # return 0
00:32:40.575 09:16:34 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:32:40.575 09:16:34 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:40.575 09:16:34 -- common/autotest_common.sh@10 -- # set +x
00:32:40.575 09:16:34 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:32:40.575 09:16:34 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:40.575 09:16:34 -- common/autotest_common.sh@10 -- # set +x
00:32:40.575 09:16:34 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:40.575 09:16:34 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:40.575 09:16:34 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:40.575 09:16:34 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:32:40.575 09:16:34 -- spdk/autotest.sh@394 -- # hostname
00:32:40.575 09:16:34 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:40.836 geninfo: WARNING: invalid characters removed from testname!
00:33:07.430 09:16:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:08.813 09:17:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:10.714 09:17:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:12.613 09:17:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:14.526 09:17:08 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:17.067 09:17:10 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:19.619 09:17:13 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:19.619 09:17:13 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:33:19.619 09:17:13 -- common/autotest_common.sh@1681 -- $ lcov --version
00:33:19.619 09:17:13 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:33:19.619 09:17:13 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:33:19.619 09:17:13 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:33:19.619 09:17:13 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:33:19.619 09:17:13 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:33:19.619 09:17:13 -- scripts/common.sh@336 -- $ IFS=.-:
00:33:19.619 09:17:13 -- scripts/common.sh@336 -- $ read -ra ver1
00:33:19.619 09:17:13 -- scripts/common.sh@337 -- $ IFS=.-:
00:33:19.619 09:17:13 -- scripts/common.sh@337 -- $ read -ra ver2
00:33:19.619 09:17:13 -- scripts/common.sh@338 -- $ local 'op=<'
00:33:19.619 09:17:13 -- scripts/common.sh@340 -- $ ver1_l=2
00:33:19.619 09:17:13 -- scripts/common.sh@341 -- $ ver2_l=1
00:33:19.619 09:17:13 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:33:19.619 09:17:13 -- scripts/common.sh@344 -- $ case "$op" in
00:33:19.619 09:17:13 -- scripts/common.sh@345 -- $ : 1
00:33:19.619 09:17:13 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:33:19.619 09:17:13 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:19.619 09:17:13 -- scripts/common.sh@365 -- $ decimal 1
00:33:19.619 09:17:13 -- scripts/common.sh@353 -- $ local d=1
00:33:19.619 09:17:13 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:33:19.619 09:17:13 -- scripts/common.sh@355 -- $ echo 1
00:33:19.619 09:17:13 -- scripts/common.sh@365 -- $ ver1[v]=1
00:33:19.619 09:17:13 -- scripts/common.sh@366 -- $ decimal 2
00:33:19.619 09:17:13 -- scripts/common.sh@353 -- $ local d=2
00:33:19.619 09:17:13 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:33:19.619 09:17:13 -- scripts/common.sh@355 -- $ echo 2
00:33:19.619 09:17:13 -- scripts/common.sh@366 -- $ ver2[v]=2
00:33:19.619 09:17:13 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:33:19.619 09:17:13 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:33:19.619 09:17:13 -- scripts/common.sh@368 -- $ return 0
00:33:19.619 09:17:13 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:19.619 09:17:13 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:33:19.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.619 --rc genhtml_branch_coverage=1
00:33:19.619 --rc genhtml_function_coverage=1
00:33:19.619 --rc genhtml_legend=1
00:33:19.619 --rc geninfo_all_blocks=1
00:33:19.619 --rc geninfo_unexecuted_blocks=1
00:33:19.619 
00:33:19.619 '
00:33:19.619 09:17:13 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:33:19.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.619 --rc genhtml_branch_coverage=1
00:33:19.619 --rc genhtml_function_coverage=1
00:33:19.619 --rc genhtml_legend=1
00:33:19.619 --rc geninfo_all_blocks=1
00:33:19.619 --rc geninfo_unexecuted_blocks=1
00:33:19.619 
00:33:19.619 '
00:33:19.619 09:17:13 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:33:19.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.619 --rc genhtml_branch_coverage=1
00:33:19.619 --rc genhtml_function_coverage=1
00:33:19.619 --rc genhtml_legend=1
00:33:19.619 --rc geninfo_all_blocks=1
00:33:19.619 --rc geninfo_unexecuted_blocks=1
00:33:19.619 
00:33:19.619 '
00:33:19.619 09:17:13 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:33:19.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:19.619 --rc genhtml_branch_coverage=1
00:33:19.619 --rc genhtml_function_coverage=1
00:33:19.619 --rc genhtml_legend=1
00:33:19.619 --rc geninfo_all_blocks=1
00:33:19.619 --rc geninfo_unexecuted_blocks=1
00:33:19.619 
00:33:19.619 '
00:33:19.619 09:17:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:19.619 09:17:13 -- scripts/common.sh@15 -- $ shopt -s extglob
00:33:19.619 09:17:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:19.619 09:17:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:19.619 09:17:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:19.619 09:17:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.619 09:17:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.619 09:17:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.619 09:17:13 -- paths/export.sh@5 -- $ export PATH
00:33:19.619 09:17:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.882 09:17:13 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:19.882 09:17:13 -- common/autobuild_common.sh@479 -- $ date +%s
00:33:19.882 09:17:13 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732785433.XXXXXX
00:33:19.882 09:17:13 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732785433.s4EGMZ
00:33:19.882 09:17:13 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:33:19.882 09:17:13 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:33:19.882 09:17:13 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:33:19.882 09:17:13 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:33:19.882 09:17:13 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:19.882 09:17:13 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:19.882 09:17:13 -- common/autobuild_common.sh@495 -- $ get_config_params
00:33:19.882 09:17:13 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:33:19.882 09:17:13 -- common/autotest_common.sh@10 -- $ set +x
00:33:19.882 09:17:13 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:33:19.882 09:17:13 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:33:19.882 09:17:13 -- pm/common@17 -- $ local monitor
00:33:19.882 09:17:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:19.882 09:17:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:19.882 09:17:13 -- pm/common@25 -- $ sleep 1
00:33:19.882 09:17:13 -- pm/common@21 -- $ date +%s
00:33:19.882 09:17:13 -- pm/common@21 -- $ date +%s
00:33:19.882 09:17:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732785433
00:33:19.882 09:17:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732785433
00:33:19.883 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732785433_collect-cpu-load.pm.log
00:33:19.883 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732785433_collect-vmstat.pm.log
00:33:20.827 09:17:14 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:33:20.827 09:17:14 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:33:20.827 09:17:14 -- spdk/autopackage.sh@14 -- $ timing_finish
00:33:20.827 09:17:14 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:20.827 09:17:14 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:20.827 09:17:14 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:20.827 09:17:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:20.827 09:17:14 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:20.827 09:17:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:20.827 09:17:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:20.827 09:17:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:20.827 09:17:14 -- pm/common@44 -- $ pid=97659
00:33:20.827 09:17:14 -- pm/common@50 -- $ kill -TERM 97659
00:33:20.827 09:17:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:20.827 09:17:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:20.827 09:17:14 -- pm/common@44 -- $ pid=97661
00:33:20.827 09:17:14 -- pm/common@50 -- $ kill -TERM 97661
00:33:20.827 + [[ -n 5769 ]]
00:33:20.827 + sudo kill 5769
00:33:20.838 [Pipeline] }
00:33:20.854 [Pipeline] // timeout
00:33:20.859 [Pipeline] }
00:33:20.873 [Pipeline] // stage
00:33:20.878 [Pipeline] }
00:33:20.892 [Pipeline] // catchError
00:33:20.902 [Pipeline] stage
00:33:20.904 [Pipeline] { (Stop VM)
00:33:20.915 [Pipeline] sh
00:33:21.195 + vagrant halt
00:33:23.735 ==> default: Halting domain...
00:33:30.334 [Pipeline] sh
00:33:30.622 + vagrant destroy -f
00:33:33.163 ==> default: Removing domain...
00:33:34.119 [Pipeline] sh
00:33:34.471 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:34.490 [Pipeline] }
00:33:34.505 [Pipeline] // stage
00:33:34.511 [Pipeline] }
00:33:34.524 [Pipeline] // dir
00:33:34.530 [Pipeline] }
00:33:34.545 [Pipeline] // wrap
00:33:34.551 [Pipeline] }
00:33:34.563 [Pipeline] // catchError
00:33:34.571 [Pipeline] stage
00:33:34.574 [Pipeline] { (Epilogue)
00:33:34.587 [Pipeline] sh
00:33:34.873 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:40.162 [Pipeline] catchError
00:33:40.164 [Pipeline] {
00:33:40.178 [Pipeline] sh
00:33:40.463 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:40.463 Artifacts sizes are good
00:33:40.474 [Pipeline] }
00:33:40.489 [Pipeline] // catchError
00:33:40.501 [Pipeline] archiveArtifacts
00:33:40.509 Archiving artifacts
00:33:40.641 [Pipeline] cleanWs
00:33:40.653 [WS-CLEANUP] Deleting project workspace...
00:33:40.653 [WS-CLEANUP] Deferred wipeout is used...
00:33:40.661 [WS-CLEANUP] done
00:33:40.663 [Pipeline] }
00:33:40.679 [Pipeline] // stage
00:33:40.684 [Pipeline] }
00:33:40.698 [Pipeline] // node
00:33:40.704 [Pipeline] End of Pipeline
00:33:40.747 Finished: SUCCESS